CN111563844A - Adaptive fusion method for remote sensing image - Google Patents


Info

Publication number: CN111563844A
Authority: CN (China)
Prior art keywords: resolution, image, low, dictionary, patch
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202010400074.XA
Other languages: Chinese (zh)
Inventors: 曹飞龙 (Cao Feilong), 应汉驰 (Ying Hanchi)
Current Assignee: China Jiliang University
Original Assignee: China Jiliang University
Priority date / Filing date: 2020-05-13
Publication date: 2020-08-21
Application filed by China Jiliang University

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

The invention discloses a remote sensing image fusion method. High- and low-resolution remote sensing images at known times are differenced to obtain difference images. Patches of the same size are extracted from corresponding positions of the high- and low-resolution difference images participating in training, and these patches form a training set. A high-/low-resolution dictionary pair is trained on the constructed training set. Patches are then extracted from the target low-resolution difference image and encoded with the low-resolution dictionary, taking into account the structural similarity between bands and the correlation of the dictionary, to obtain the corresponding sparse representation coefficients. Multiplying the sparse representation coefficients by the high-resolution dictionary yields the patches of the high-resolution difference image at the corresponding time, and fusing all high-resolution patches yields the high-resolution difference image. A weighted sum of the two difference images gives the final fused high-resolution remote sensing image at the target time. The method makes effective use of the structural information between bands, accounts for both the sparsity and the correlation of the dictionary pair, and predicts remote sensing images closer to the real data.

Description

Adaptive fusion method for remote sensing image
Technical Field
The invention belongs to the field of image processing, and particularly relates to a remote sensing image fusion method.
Background
The remote sensing satellite is an important tool for monitoring ground changes and plays a great role in observing vegetation and land-cover changes. However, owing to technological limitations, the design of remote sensing satellite sensors has to trade off spatial against temporal resolution. For example, the MODIS sensor provides an image with a spatial resolution of 500 meters every day, whereas the Landsat ETM+ sensor has a spatial resolution of 30 meters but a revisit period of 16 days. This trade-off between spatial and temporal resolution hampers practical applications of remote sensing, especially those requiring images with both high temporal and high spatial resolution. Spatiotemporal reflectance fusion models have therefore been proposed to combine data from different sensors and obtain images with high spatiotemporal resolution.
The spatial and temporal adaptive reflectance fusion model (STARFM) was the earliest fusion model based on a weighting method. The model computes the value of the center pixel from the neighboring pixels of the image at a given time through a weighting function, with the weights determined by the spectral difference, the temporal difference and the spatial distance. Subsequently, Zhu et al. and Hilker et al. improved on the STARFM algorithm and enhanced the fusion effect. However, weighting methods remain limited because their linear combinations smooth out changes in surface information.
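The following Python sketch illustrates the weighting idea behind STARFM-style prediction on a single band. It is a simplified illustration of the background method only: the window size, the 1/(S·T·dist) weight form and all function names are assumptions, not the patented method or the exact STARFM formula.

    import numpy as np

    def starfm_like_predict(L1, M1, M2, win=7, eps=1e-8):
        # Predict the high-resolution image at t2 from the high-resolution
        # image L1 at t1 and the low-resolution images M1, M2 (already
        # resampled to the high-resolution grid) at t1 and t2.
        H, W = L1.shape
        r = win // 2
        out = np.zeros((H, W))
        Lp = np.pad(L1.astype(float), r, mode="reflect")
        M1p = np.pad(M1.astype(float), r, mode="reflect")
        M2p = np.pad(M2.astype(float), r, mode="reflect")
        yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
        dist = 1.0 + np.sqrt(yy ** 2 + xx ** 2) / r   # spatial distance term
        for i in range(H):
            for j in range(W):
                Lw = Lp[i:i + win, j:j + win]
                M1w = M1p[i:i + win, j:j + win]
                M2w = M2p[i:i + win, j:j + win]
                S = np.abs(Lw - M1w) + eps            # spectral difference
                T = np.abs(M2w - M1w) + eps           # temporal difference
                w = 1.0 / (S * T * dist)              # combined weight
                w /= w.sum()
                # each neighbour contributes its own observed temporal change
                out[i, j] = np.sum(w * (Lw + M2w - M1w))
        return out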
Another class of reflectance fusion methods, the dictionary learning methods, was proposed to overcome the shortcomings of the weighting methods. Motivated by the strong performance of dictionary learning and sparse representation in image super-resolution reconstruction, Huang et al. proposed a sparse-representation-based spatiotemporal reflectance fusion model in 2012. The model combines sparse representation with the reflectance fusion problem and establishes the correspondence between high- and low-resolution image pairs through dictionary pairs and sparse coding. The advantage of dictionary learning methods over weighting methods is that they can reveal hidden relationships between image pairs in the sparse coding space and thus better extract information about structural changes in the image.
However, conventional dictionary learning methods do not consider the structural similarity of the images in the fusion stage. Although different bands have different reflectance ranges, their edge information is still similar. In addition, dictionary pairs obtained by conventional dictionary learning cannot effectively predict images at the desired time because they lack image information of the target time. The regularization constraint used in the fusion process, the l1 norm, considers only the sparsity of the dictionary and cannot reduce the influence of dictionary pairs with insufficient information.
Disclosure of Invention
In view of the above, to solve the above problems in the prior art, the present invention provides a remote sensing image adaptive fusion method capable of predicting high-resolution remote sensing images.
The technical solution of the invention is a remote sensing image adaptive fusion method.
The MODIS image serves as the low-resolution image and the Landsat ETM+ image as the high-resolution image. The known data are the MODIS images M1, M2 and M3 at times t1, t2 and t3 (t1 < t2 < t3) and the Landsat ETM+ images L1 and L3 at times t1 and t3; the Landsat image L2 at time t2 is to be predicted. The method comprises the following steps:
a. Interpolate the low-resolution images to the same size as the high-resolution images and difference the low- and high-resolution images to obtain low- and high-resolution difference images. At times t1 and t3, extract patches of size p × p from corresponding positions in the low- and high-resolution difference images; the resulting high- and low-resolution patches form a training set;
b. Train the high- and low-resolution dictionary pair on the training set obtained in step a;
c. Divide the low-resolution difference image M21 between times t1 and t2 into p × p patches according to the overlap coefficient v. Encode each patch with the low-resolution dictionary obtained in step b to obtain its sparse representation coefficients, multiply the coefficients by the high-resolution dictionary to obtain the corresponding high-resolution patch, and fuse all high-resolution patches into the high-resolution difference image L21 between t1 and t2. Obtain the high-resolution difference image L32 between t2 and t3 in the same way;
d. Take the weighted sum of the two high-resolution difference images at the target time to obtain the final predicted high-resolution remote sensing image L2.
Optionally, in step a, the low-resolution difference image is defined as Mij = Mi − Mj, and the high-resolution difference image is defined analogously. The patch size p is 7, and 2000 corresponding patches are randomly extracted from the high- and low-resolution difference images at times t1 and t3 to form the training set;
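A minimal sketch of step a, assuming that the training differences are taken between t3 and t1 and that the low-resolution images have already been interpolated to the high-resolution size; the function and variable names are illustrative, not from the patent:

    import numpy as np

    def build_training_set(L1, L3, M1u, M3u, p=7, n_patches=2000, seed=0):
        # L1, L3: high-resolution images; M1u, M3u: low-resolution images
        # already interpolated to the high-resolution size.
        Lh = L3 - L1    # high-resolution difference image L31
        Ml = M3u - M1u  # low-resolution difference image M31
        rng = np.random.default_rng(seed)
        H, W = Lh.shape
        ys = rng.integers(0, H - p + 1, n_patches)
        xs = rng.integers(0, W - p + 1, n_patches)
        # one vectorised patch per column (49 x 2000 for p = 7)
        Xh = np.stack([Lh[y:y + p, x:x + p].ravel() for y, x in zip(ys, xs)], axis=1)
        Xl = np.stack([Ml[y:y + p, x:x + p].ravel() for y, x in zip(ys, xs)], axis=1)
        return Xh, Xl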
optionally, in the step b, the high-resolution dictionary D is solved by adopting a K-SVD algorithmlAnd a low resolution dictionary DmDictionary size 256; the dictionary is obtained by independent training in the three bands of near infrared, red and green.
Optionally, in step c, the overlap coefficient v is 4, and the patch block size p is 7;
to simplify the formula, the following vectors and matrices are introduced:
α = [α_N^T, α_R^T, α_G^T]^T,  m = [m_N^T, m_R^T, m_G^T]^T

Dm = diag(Dm^N, Dm^R, Dm^G)   (block diagonal)
where α and m denote the concatenations, over the three bands, of the sparse representation coefficients and of the low-resolution patches, respectively, and Dm is the dictionary matrix whose diagonal blocks are the low-resolution dictionaries of the three bands. The subscripts N, R and G denote the near-infrared, red and green bands, respectively, and S denotes the two-dimensional Laplacian operator. The expression of the adaptive regularization parameter τ is:
[equation image: definition of the adaptive regularization parameter τ_NR in terms of the means and standard deviations of the N and R bands]
where μ_c is the mean value of band c, σ_c is the standard deviation of band c, and γ = 10. τ_RG and τ_GN are obtained according to the same definition. The optimization problem for the sparse representation coefficients is then defined as:

[equation image: the objective for α, combining the reconstruction error ||m − Dm·α||², the adaptive band-constraint term built from S and τ, and the nuclear-norm regularizer λ||Dm·Diag(α)||_*]
The ADMM method is used to solve for the sparse representation coefficients α. The update formula for α^(k+1) is:

[equation image: closed-form α^(k+1) update, the solution of the quadratic subproblem in α]
where [equation image: auxiliary matrix used in the α^(k+1) update], ρ = 0.1, and Z^0 and μ^0 are both initialized to 0. For a vector v, Diag(v) denotes the diagonal matrix whose diagonal elements are the elements of v; for a matrix M, diag(M) denotes the vector whose i-th element is the i-th diagonal element of M;
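These two conventions map directly onto numpy's np.diag, which both builds a diagonal matrix from a vector and extracts the diagonal vector from a matrix:

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])
    Dv = np.diag(v)                      # Diag(v): diagonal matrix built from v
    assert np.allclose(np.diag(Dv), v)   # diag(M): diagonal vector of M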
The update formula for Z^(k+1) is the singular value thresholding step:

Z^(k+1) = U·S_(λ/ρ)(Σ)·V^*,  where S_(λ/ρ)(Σ) replaces each singular value σ_i by max(σ_i − λ/ρ, 0),
where λ is set to 0.15; UΣV^* is the singular value decomposition of (Dm·Diag(α^(k+1)) + μ^k), and σ_i is the i-th singular value of (Dm·Diag(α^(k+1)) + μ^k);
The update formula for μ^(k+1) is:

μ^(k+1) = μ^k + Dm·Diag(α^(k+1)) − Z^(k+1)
α^(k+1), Z^(k+1) and μ^(k+1) are iterated alternately until the convergence condition ||Dm·Diag(α) − Z||_F ≤ ε is satisfied or a preset number of iterations is reached, which yields the required sparse representation coefficients α^*. The patch of the corresponding high-resolution image is then obtained as l = Dl·α^*, and the high-resolution difference image L21 is obtained by fusing all patches.
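A hedged Python sketch of this ADMM solver. Since the α-update and the band-constraint equations survive only as lost images, the sketch solves an assumed surrogate objective, min_α ||m − Dm·α||² + λ||Dm·Diag(α)||_*, and omits the adaptive band constraint; the singular-value-thresholding Z-update, the dual update and the stopping rule follow the text above:

    import numpy as np

    def svt(X, tau):
        # singular value thresholding: prox of tau * nuclear norm
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def admm_code(Dm, m, lam=0.15, rho=0.1, n_iter=100, tol=1e-4):
        # Dm: d x K low-resolution dictionary, m: vectorised low-res patch.
        d, K = Dm.shape
        a = np.zeros(K)
        Z = np.zeros((d, K))
        mu = np.zeros((d, K))
        G = Dm.T @ Dm
        col_sq = np.sum(Dm * Dm, axis=0)   # ||Dm[:, k]||^2
        for _ in range(n_iter):
            # alpha-update: Dm Diag(a) has columns a_k * Dm[:, k], so the
            # augmented term is separable in a and the subproblem reduces
            # to a plain linear solve.
            rhs = 2.0 * Dm.T @ m + rho * np.sum(Dm * (Z - mu), axis=0)
            a = np.linalg.solve(2.0 * G + rho * np.diag(col_sq), rhs)
            # Z-update: singular value thresholding at lam / rho
            Z = svt(Dm @ np.diag(a) + mu, lam / rho)
            # dual update
            mu = mu + Dm @ np.diag(a) - Z
            # Frobenius-norm stopping rule from the text
            if np.linalg.norm(Dm @ np.diag(a) - Z) <= tol:
                break
        return a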
Optionally, in step d, the weighted sum formula is:
L2 = W21 × (L1 + L21) + W32 × (L3 − L32),
where the weights W21 and W32 are both 0.5.
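In code, step d reduces to a single line; a sketch (function name illustrative):

    def fuse(L1, L3, L21, L32, w21=0.5, w32=0.5):
        # step d: weighted sum of the two high-resolution predictions
        return w21 * (L1 + L21) + w32 * (L3 - L32)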
Compared with the prior art, the invention has the beneficial effects that:
First, a high-/low-resolution dictionary pair is trained on patches of the high- and low-resolution difference images at known times; the low-resolution difference image at the target time is then encoded while accounting for the structural similarity between bands and the correlation of the dictionary, which yields the high-resolution remote sensing image to be predicted. The structural similarity between bands is captured through an adaptive band constraint. In addition, unlike existing dictionary learning models that emphasize only sparsity, the invention uses the nuclear norm as the regularization term, so that both the sparsity and the correlation of the dictionary are considered. The method therefore reduces the influence of dictionary pairs with insufficient information, improves the expressive power of the dictionary pairs, compensates for the shortcomings of existing methods in a targeted way, and produces predicted images closer to the real images.
Drawings
FIG. 1 is a schematic diagram of the problem description;
FIG. 2a is the MODIS image at the target time;
FIG. 2b is the Landsat image obtained by the fusion of the present invention;
FIG. 3 is a schematic diagram of the overall flow of the remote sensing image fusion method of the present invention.
Detailed Description
In the dictionary pair training stage, corresponding patches are extracted from the known high- and low-resolution difference images to form a training set, and the high-/low-resolution dictionary pair is trained on it. In the fusion stage, based on the structural similarity between the bands of the remote sensing image and on the sparsity and correlation of the dictionary, the low-resolution dictionary encodes the patches of the low-resolution difference image at the target time, producing the sparse representation coefficients of each patch. Multiplying the sparse representation coefficients by the high-resolution dictionary yields the patches of the high-resolution difference image at the target time, and fusing all patches yields the high-resolution difference image. Finally, a weighted sum of the two high-resolution difference images gives the fused high-resolution remote sensing image at the target time.
The basic idea of the invention is as follows:
First, the high- and low-resolution remote sensing images at known times are differenced to obtain difference images. Patches of the same size are extracted from corresponding positions of the high- and low-resolution difference images participating in training, and these patches form a training set.
Second, the high-/low-resolution dictionary pair is trained on the constructed training set.
Third, patches are extracted from the target low-resolution difference image and encoded with the low-resolution dictionary, taking into account the structural similarity between bands and the correlation of the dictionary, to obtain the corresponding sparse representation coefficients. Multiplying the coefficients by the high-resolution dictionary yields the patches of the high-resolution difference image at the corresponding time, and fusing all high-resolution patches yields the high-resolution difference image.
Finally, the weighted sum of the two difference images gives the final fused high-resolution remote sensing image at the target time. An end-to-end sketch wiring these steps together is given below.
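A hedged end-to-end sketch in Python, reusing the helper sketches defined earlier (build_training_set, ksvd, admm_code). The bicubic upsampling via scipy.ndimage.zoom, the patch stride p − v and the averaging of overlapping patches are assumptions for illustration:

    import numpy as np
    from scipy.ndimage import zoom

    def predict_L2(L1, L3, M1, M2, M3, p=7, v=4):
        # single-band sketch; s is the resolution ratio between the grids
        s = L1.shape[0] // M1.shape[0]
        up = lambda M: zoom(M.astype(float), s, order=3)  # bicubic-style
        M1u, M2u, M3u = up(M1), up(M2), up(M3)
        # steps a and b: training set and coupled dictionaries from (t1, t3)
        Xh, Xl = build_training_set(L1, L3, M1u, M3u, p=p)
        Dj, _ = ksvd(np.vstack([Xh, Xl]), n_atoms=256)
        Dl, Dm = Dj[:p * p, :], Dj[p * p:, :]

        def reconstruct(Mdiff):
            H, W = Mdiff.shape
            out = np.zeros((H, W))
            cnt = np.zeros((H, W))
            step = max(p - v, 1)  # stride implied by the overlap coefficient
            for y in range(0, H - p + 1, step):
                for x in range(0, W - p + 1, step):
                    a = admm_code(Dm, Mdiff[y:y + p, x:x + p].ravel())
                    out[y:y + p, x:x + p] += (Dl @ a).reshape(p, p)
                    cnt[y:y + p, x:x + p] += 1
            return out / np.maximum(cnt, 1)  # average overlapping patches

        L21 = reconstruct(M2u - M1u)  # step c
        L32 = reconstruct(M3u - M2u)
        return 0.5 * (L1 + L21) + 0.5 * (L3 - L32)  # step d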
The present invention will be further illustrated with reference to the following examples.
The invention provides a remote sensing image fusion algorithm, which comprises the following steps:
the MODIS image is used as a low-resolution image, and the Landsat ETM + image is used as a high-resolution image. Known data as t1、t2And t3(t1<t2<t3) MODIS image M of time1、M2And M3And t is1And t3Landsat ETM + image L at time1And L3Need to predict t2Landsat image L of time2
a. Interpolate the low-resolution images to the same size as the high-resolution images, and extract patches of size p × p from the high- and low-resolution difference images at times t1 and t3; the resulting high- and low-resolution patches form a training set.
The patch size p is 7, and 2000 corresponding patches are randomly extracted from the high- and low-resolution difference images at times t1 and t3 to form the training set.
The problem description is shown in FIG. 1.
b. Train the high- and low-resolution dictionary pair on the training set obtained in step a.
The high-resolution dictionary Dl and the low-resolution dictionary Dm are trained with the K-SVD algorithm, and the dictionary size is 256. The near-infrared, red and green bands are trained separately to obtain the dictionary pairs of the corresponding bands;
c. Divide the low-resolution difference image M21 between times t1 and t2 into p × p patches according to the overlap coefficient v, encode each patch with the low-resolution dictionary obtained in step b to obtain its sparse representation coefficients, multiply the coefficients by the high-resolution dictionary to obtain the corresponding high-resolution patches, and fuse all high-resolution patches to obtain the high-resolution difference image L21 between t1 and t2. The high-resolution difference image L32 between t2 and t3 is obtained in the same way.
The overlap coefficient v is 4 and the patch size p is 7;
to simplify the formula, the following vectors and matrices are introduced:
α = [α_N^T, α_R^T, α_G^T]^T,  m = [m_N^T, m_R^T, m_G^T]^T

Dm = diag(Dm^N, Dm^R, Dm^G)   (block diagonal)
where α and m denote the concatenations, over the three bands, of the sparse representation coefficients and of the low-resolution patches, respectively, and Dm is the dictionary matrix whose diagonal blocks are the low-resolution dictionaries of the three bands. The subscripts N, R and G denote the near-infrared, red and green bands, respectively, and S denotes the two-dimensional Laplacian operator. The expression of the adaptive regularization parameter τ is:
[equation image: definition of the adaptive regularization parameter τ_NR in terms of the means and standard deviations of the N and R bands]
where μ_c is the mean value of band c, σ_c is the standard deviation of band c, and γ = 10. τ_RG and τ_GN are obtained according to the same definition. The optimization problem for the sparse representation coefficients is then defined as:

[equation image: the objective for α, combining the reconstruction error ||m − Dm·α||², the adaptive band-constraint term built from S and τ, and the nuclear-norm regularizer λ||Dm·Diag(α)||_*]
The ADMM method is adopted to solve for the sparse representation coefficients α. The update formula for α^(k+1) is:

[equation image: closed-form α^(k+1) update, the solution of the quadratic subproblem in α]
where [equation image: auxiliary matrix used in the α^(k+1) update], ρ = 0.1, and Z^0 and μ^0 are both initialized to 0. For a vector v, Diag(v) denotes the diagonal matrix whose diagonal elements are the elements of v; for a matrix M, diag(M) denotes the vector whose i-th element is the i-th diagonal element of M;
The update formula for Z^(k+1) is the singular value thresholding step:

Z^(k+1) = U·S_(λ/ρ)(Σ)·V^*,  where S_(λ/ρ)(Σ) replaces each singular value σ_i by max(σ_i − λ/ρ, 0),
where λ is set to 0.15; UΣV^* is the singular value decomposition of (Dm·Diag(α^(k+1)) + μ^k), and σ_i is the i-th singular value of (Dm·Diag(α^(k+1)) + μ^k);
The update formula for μ^(k+1) is:

μ^(k+1) = μ^k + Dm·Diag(α^(k+1)) − Z^(k+1)
α^(k+1), Z^(k+1) and μ^(k+1) are iterated alternately until the convergence condition ||Dm·Diag(α) − Z||_F ≤ ε is satisfied or a preset number of iterations is reached, which yields the required sparse representation coefficients α^*. The patch of the corresponding high-resolution image is then obtained as l = Dl·α^*, and the high-resolution difference image L21 can be obtained by fusing all patches;
d. Take the weighted sum of the two high-resolution difference images at the target time to obtain the final predicted high-resolution remote sensing image L2.
The weighted sum formula is:
L2 = W21 × (L1 + L21) + W32 × (L3 − L32),
where W21 and W32 are both 0.5.
FIG. 2a shows the MODIS image at the target time, and FIG. 2b shows the Landsat image obtained by the fusion of the present invention.

Claims (5)

1. A remote sensing image adaptive fusion method, characterized in that:
the MODIS image serves as the low-resolution image and the Landsat ETM+ image as the high-resolution image; the known data are the MODIS images M1, M2 and M3 at times t1, t2 and t3 (t1 < t2 < t3) and the Landsat ETM+ images L1 and L3 at times t1 and t3; the Landsat image L2 at time t2 is to be predicted; the method comprises the following steps:
a. interpolating the low-resolution images to the same size as the high-resolution images, differencing the low- and high-resolution images to obtain low- and high-resolution difference images, and extracting patches of size p × p from corresponding positions in the low- and high-resolution difference images at times t1 and t3, the resulting high- and low-resolution patches forming a training set;
b. training the high- and low-resolution dictionary pair on the training set obtained in step a;
c. dividing the low-resolution difference image M21 between times t1 and t2 into p × p patches according to the overlap coefficient v, encoding each patch with the low-resolution dictionary obtained in step b to obtain its sparse representation coefficients, multiplying the coefficients by the high-resolution dictionary to obtain the corresponding high-resolution patches, and fusing all high-resolution patches to obtain the high-resolution difference image L21 between t1 and t2; obtaining the high-resolution difference image L32 between t2 and t3 in the same way;
d. taking the weighted sum of the two high-resolution difference images L21 and L32 at the target time to obtain the final fused high-resolution remote sensing image L2.
2. The adaptive fusion method for remote sensing images according to claim 1, characterized in that: in step a, the low-resolution difference image is defined as Mij = Mi − Mj, and the high-resolution difference image is defined analogously; the patch size p is 7, and 2000 corresponding patches are randomly extracted from the high- and low-resolution difference images at times t1 and t3 to form the training set.
3. The adaptive fusion method for remote sensing images according to claim 1, characterized in that: in step b, the high-resolution dictionary Dl and the low-resolution dictionary Dm are solved by the K-SVD algorithm, with a dictionary size of 256; the dictionaries are trained independently for the near-infrared, red and green bands.
4. The adaptive fusion method for remote sensing images according to claim 1, characterized in that: in step c, the overlap coefficient v is 4 and the patch size p is 7;
to simplify the formula, the following vectors and matrices are introduced:
α = [α_N^T, α_R^T, α_G^T]^T,  m = [m_N^T, m_R^T, m_G^T]^T

Dm = diag(Dm^N, Dm^R, Dm^G)   (block diagonal)
where α and m denote the concatenations, over the three bands, of the sparse representation coefficients and of the low-resolution patches, respectively, and Dm is the dictionary matrix whose diagonal blocks are the low-resolution dictionaries of the three bands; the subscripts N, R and G denote the near-infrared, red and green bands, respectively, and S denotes the two-dimensional Laplacian operator; the expression of the adaptive regularization parameter τ is:
[equation image: definition of the adaptive regularization parameter τ_NR in terms of the means and standard deviations of the N and R bands]
where μ_c is the mean value of band c, σ_c is the standard deviation of band c, and γ = 10; τ_RG and τ_GN are obtained according to the same definition; the optimization problem for the sparse representation coefficients is then defined as:

[equation image: the objective for α, combining the reconstruction error ||m − Dm·α||², the adaptive band-constraint term built from S and τ, and the nuclear-norm regularizer λ||Dm·Diag(α)||_*]
the ADMM method is used to solve for the sparse representation coefficients α, and the update formula for α^(k+1) is:

[equation image: closed-form α^(k+1) update, the solution of the quadratic subproblem in α]
where [equation image: auxiliary matrix used in the α^(k+1) update], ρ = 0.1, and Z^0 and μ^0 are both initialized to 0; for a vector v, Diag(v) denotes the diagonal matrix whose diagonal elements are the elements of v; for a matrix M, diag(M) denotes the vector whose i-th element is the i-th diagonal element of M;
the update formula for Z^(k+1) is the singular value thresholding step:

Z^(k+1) = U·S_(λ/ρ)(Σ)·V^*,  where S_(λ/ρ)(Σ) replaces each singular value σ_i by max(σ_i − λ/ρ, 0),
where λ is set to 0.15; u-sigma V*Is shown (D)mDiag(αk+1)+μk) Singular value decomposition of, σiIs shown (D)mDiag(αk+1)+μk) The ith singular value of (a);
the update formula for μ^(k+1) is:

μ^(k+1) = μ^k + Dm·Diag(α^(k+1)) − Z^(k+1)
α^(k+1), Z^(k+1) and μ^(k+1) are iterated alternately until the convergence condition ||Dm·Diag(α) − Z||_F ≤ ε is satisfied or a preset number of iterations is reached, which yields the required sparse representation coefficients α^*; the patch of the corresponding high-resolution image is then obtained as l = Dl·α^*, and the high-resolution difference image L21 can be obtained by fusing all patches.
5. The adaptive fusion method for remote sensing images according to claim 1, characterized in that: in step d, the weighted sum formula is:
L2 = W21 × (L1 + L21) + W32 × (L3 − L32),
where the weights W21 and W32 are both 0.5.
Priority Applications (1)

Application Number: CN202010400074.XA; Priority Date / Filing Date: 2020-05-13; Title: Adaptive fusion method for remote sensing image

Publications (1)

Publication Number: CN111563844A; Publication Date: 2020-08-21

Family ID: 72074643

Country Status (1)

CN: CN111563844A (en)

Citations

* Cited by examiner, † Cited by third party

Patent Citations (1)

CN104252703A * (priority 2014-09-04, published 2014-12-31): Wavelet preprocessing and sparse representation-based satellite remote sensing image super-resolution reconstruction method

Non-Patent Citations (1)

HANCHI YING et al.: "Sparsity-Based Spatiotemporal Fusion via Adaptive Multi-Band Constraints" *

Cited By (3)

CN112183595A * (priority 2020-09-18, published 2021-01-05), 中国科学院空天信息创新研究院 (Aerospace Information Research Institute, Chinese Academy of Sciences): Space-time remote sensing image fusion method based on packet compressed sensing
CN112183595B (granted 2023-08-04): Space-time remote sensing image fusion method based on grouping compressed sensing
CN117115679A * (priority 2023-10-25, published 2023-11-24), 北京佳格天地科技有限公司 (Beijing Jiage Tiandi Technology Co., Ltd.): Screening method for space-time fusion remote sensing image pairs

Similar Documents

Publication Publication Date Title
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
Li et al. Hyperspectral image super-resolution using deep convolutional neural network
Mei et al. Spatial and spectral joint super-resolution using convolutional neural network
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
Fu et al. Bidirectional 3D quasi-recurrent neural network for hyperspectral image super-resolution
CN108734661B (en) High-resolution image prediction method for constructing loss function based on image texture information
CN110533620A (en) The EO-1 hyperion and panchromatic image fusion method of space characteristics are extracted based on AAE
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
CN108304755A (en) The training method and device of neural network model for image procossing
CN113327218B (en) Hyperspectral and full-color image fusion method based on cascade network
CN112991173A (en) Single-frame image super-resolution reconstruction method based on dual-channel feature migration network
CN113177882A (en) Single-frame image super-resolution processing method based on diffusion model
CN109801218B (en) Multispectral remote sensing image Pan-sharpening method based on multilayer coupling convolutional neural network
CN111563844A (en) Adaptive fusion method for remote sensing image
Ge et al. Improved semisupervised unet deep learning model for forest height mapping with satellite sar and optical data
Liu et al. An efficient residual learning neural network for hyperspectral image superresolution
CN116309070A (en) Super-resolution reconstruction method and device for hyperspectral remote sensing image and computer equipment
CN115512192A (en) Multispectral and hyperspectral image fusion method based on cross-scale octave convolution network
CN115601281A (en) Remote sensing image space-time fusion method and system based on deep learning and electronic equipment
Lou et al. Preliminary investigation on single remote sensing image inpainting through a modified GAN
CN114359724B (en) Remote sensing building earthquake damage information extraction method based on color-parameter migration
CN116403103A (en) Remote sensing image analysis and cyanobacteria bloom prediction method based on four-dimensional generation countermeasure network
Li et al. A pseudo-siamese deep convolutional neural network for spatiotemporal satellite image fusion
Singh et al. Low-light image enhancement for UAVs with multi-feature fusion deep neural networks
CN113902646A (en) Remote sensing image pan-sharpening method based on depth layer feature weighted fusion network

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination