WO2023000505A1 - Two-order lightweight network panchromatic sharpening method combining guided filtering and NSCT - Google Patents

Two-order lightweight network panchromatic sharpening method combining guided filtering and NSCT

Info

Publication number
WO2023000505A1
WO2023000505A1 (PCT/CN2021/122464, CN2021122464W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
mlpan
images
nsct
network
Prior art date
Application number
PCT/CN2021/122464
Other languages
French (fr)
Chinese (zh)
Inventor
黄梦醒
吴园园
李玉春
冯思玲
毋媛媛
吴迪
Original Assignee
海南大学
Priority date
Filing date
Publication date
Application filed by 海南大学 filed Critical 海南大学
Publication of WO2023000505A1 publication Critical patent/WO2023000505A1/en

Classifications

    • G06T5/73 — Image enhancement or restoration: deblurring; sharpening
    • G06N3/04 — Neural networks: architecture, e.g. interconnection topology
    • G06N3/08 — Neural networks: learning methods
    • G06T3/4007 — Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T5/20 — Image enhancement or restoration using local operators
    • G06T5/40 — Image enhancement or restoration using histogram techniques
    • G06T5/90 — Dynamic range modification of images or parts thereof
    • G06T2207/10032 — Satellite or aerial image; remote sensing
    • G06T2207/20024 — Filtering details
    • G06T2207/20081 — Training; learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • Y02A90/10 — ICT supporting adaptation to climate change, e.g. weather forecasting or climate simulation

Definitions

  • The invention relates to the technical field of remote sensing image processing, and in particular to a two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT.
  • Remote sensing images are widely used across industries, e.g. yield prediction, forestry pest detection, forest natural disaster prediction, geological exploration, national security, land use, and environmental change detection; limited by satellite sensor technology, however, images with both high spatial and high spectral resolution cannot be acquired directly, and only panchromatic images (PAN) with high spatial but low spectral resolution and multispectral images (MS) with low spatial but high spectral resolution can be obtained.
  • In practical applications, however, images with both high spatial and high spectral resolution are often required, and sometimes high temporal resolution as well. High-spatial, high-spectral-resolution (HSHM) images can generally be obtained through image enhancement, super-resolution reconstruction, or image fusion; the mainstream research technique is image fusion, which combines multi-source images by some method into a single higher-quality, more informative image that matches human visual perception, so that decision makers can make more precise decisions from clearer imagery.
  • The fusion of MS images and PAN images is one of the hot topics in the field of remote sensing image processing. Fusion methods can be grouped into component substitution methods, multi-resolution analysis methods, variational methods, and deep learning.
  • Component substitution methods such as IHS, GIHS, AIHS, PCA, Brovey, and GS can improve spatial resolution, but they generally distort spectral information to varying degrees. Multi-resolution analysis methods such as the wavelet transform, Laplacian pyramid (LP) decomposition, contourlet transform, curvelet transform, and non-subsampled contourlet transform (NSCT) (e.g. the NSCT-based multi-focus image fusion algorithm of publication CN103632353A) reduce spectral distortion to some extent, but their spatial resolution is relatively low and artifacts may appear. The rapid development of deep learning in computer vision has brought networks such as PNN, PCNN (e.g. the image fusion method based on gradient-domain guided filtering and an improved PCNN of publication CN112184646A), DRPNN, PanNet, and PanGAN to panchromatic sharpening with some success, but spectral distortion, low spatial resolution, low fusion quality, overfitting, and long training times remain.
  • In view of this, the present invention proposes a two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT, which preserves spectral information while improving spatial resolution and achieves high fusion quality; the two-stage lightweight network is simple, trains quickly, and prevents overfitting.
  • a two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT including the following steps:
  • Step S1 obtaining satellite remote sensing images, and preprocessing the MS images and PAN images in the remote sensing images;
  • Step S2 according to the Wald criterion, reduce the resolution of the preprocessed MS and PAN images and construct a simulation training set, a simulation test set, and a real test set, where the simulation training and test sets include DUMS, LPAN, and MS images and the real test set includes UMS and PAN images;
  • Step S3 using AIHS transformation on the DUMS image in the simulation training set to obtain the brightness I component image, and using the I component image to perform histogram equalization processing on the LPAN image to obtain the MLPAN image;
  • Step S4 filtering the MLPAN image by using a guided filter to obtain multi-scale high-frequency component MLPAN Hn and low-frequency component MLPAN Ln ;
  • Step S5 using NSCT to filter the I component image to obtain multi-scale and multi-directional high-frequency direction sub-band images I Hn and low-frequency sub-band images I Ln ;
  • Step S6 constructing a detail extraction network ResCNN according to the DUMS image, the MLPAN image, the high-frequency component MLPAN Hn , the low-frequency component MLPAN Ln , the high-frequency direction sub-band image I Hn and the low-frequency sub-band image I Ln , and obtain injection details In-details;
  • Step S7 use the injected details In-details and the DUMS image as the input of a shallow CNN network, with the MS image as the output, to establish the nonlinear NLCNN network; fully train the NLCNN network to obtain the optimal nonlinear model, freeze its parameters, and use the optimal nonlinear model to obtain the pan-sharpened image.
  • the preprocessing in step S1 includes: atmospheric correction and spatial registration.
  • The specific steps of step S2 include:
  • Step S21 according to the Wald criterion and the spatial resolution ratio between the panchromatic image and the multispectral image, the MS image and the PAN image are down-sampled using the bicubic interpolation method, and the reduced-resolution LPAN image and the DMS image are obtained;
  • Step S22 Upsampling the DMS image using a bicubic interpolation method according to the Wald criterion, and obtaining a DUMS image;
  • Step S23 using bicubic interpolation method to upsample the MS image according to the Wald criterion, and obtain the UMS image;
  • Step S24 constructing a simulation training set and a simulation test set from DUMS images, LPAN images and MS images, and constructing a real test set from UMS images and PAN images.
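Steps S21–S24 above can be sketched as follows. This is an illustrative NumPy/SciPy sketch, not the patent's implementation: the toy array shapes are assumptions, `scipy.ndimage.zoom` with `order=3` stands in for "bicubic interpolation", and `ratio=2` is the example resolution ratio from the embodiment (the actual ratio follows the sensors).

```python
import numpy as np
from scipy.ndimage import zoom

def build_wald_pair(ms, pan, ratio=2):
    """Wald-protocol simulation pair (sketch): degrade MS and PAN by `ratio`
    with cubic interpolation, then upsample the degraded MS back so that
    DUMS aligns pixel-to-pixel with LPAN; the original MS is the label."""
    dms = zoom(ms, (1.0 / ratio, 1.0 / ratio, 1), order=3)  # down-sampled MS
    lpan = zoom(pan, 1.0 / ratio, order=3)                  # reduced-res PAN
    dums = zoom(dms, (ratio, ratio, 1), order=3)            # back to MS size
    return dums, lpan, ms  # (network input MS, input PAN, reference label)

rng = np.random.default_rng(1)
ms = rng.random((60, 60, 3))    # toy 3-band MS image
pan = rng.random((120, 120))    # toy PAN image at 2x the MS resolution
dums, lpan, label = build_wald_pair(ms, pan, ratio=2)
```

The UMS image of step S23 would be produced the same way, by upsampling the original MS by `ratio` for the real test set.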
  • The AIHS transformation in step S3 obtains the I component image as I = Σ_{i=1}^{N} a_i · MS_i, where i is the i-th channel, a_i is the adaptive coefficient, and N is the total number of channels.
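The adaptive weighted sum above can be illustrated as follows. The patent does not specify how the coefficients a_i are estimated; this sketch assumes one common AIHS-style choice, a least-squares fit of Σ a_i · MS_i to a reference panchromatic intensity.

```python
import numpy as np

def aihs_intensity(ms, pan):
    """I component as an adaptive weighted sum of MS bands (a sketch).
    The adaptive coefficients a_i are estimated here by least squares so
    that sum_i a_i * MS_i best approximates the PAN intensity; the
    estimator itself is an assumption, not taken from the patent."""
    h, w, n = ms.shape
    A = ms.reshape(-1, n)          # each row: the N band values of one pixel
    b = pan.reshape(-1)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return (A @ coeffs).reshape(h, w), coeffs

rng = np.random.default_rng(2)
ms = rng.random((32, 32, 3))
true_a = np.array([0.25, 0.5, 0.25])
pan = ms @ true_a              # synthetic PAN that is exactly a weighted sum
I, a = aihs_intensity(ms, pan)
```

On this synthetic input the fit recovers the generating weights exactly, since the PAN image is constructed as a linear combination of the bands.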
  • the NSCT in step S5 includes a non-downsampling pyramid filter bank NSPFB and a non-downsampling directional filter bank NSDFB.
  • The specific steps of step S5 include:
  • Step S51 using NSPFB to decompose the I component image to obtain the low-frequency sub-band image I Li and the high-frequency sub-band image I Hi ;
  • Step S52 using NSPFB to decompose the low-frequency sub-band image, and obtain the low-frequency sub-band image and the high-frequency sub-band image of the next layer;
  • Step S53 using NSDFB to filter the high-frequency sub-band images of each layer respectively, to obtain the high-frequency direction sub-band images of each layer.
  • The specific steps of step S6 include:
  • Step S61 using the DUMS image, the MLPAN image, the high frequency component MLPAN Hn , the low frequency component MLPAN Ln , the high frequency direction subband image I Hn and the low frequency subband image I Ln as the input of the ResCNN network;
  • Step S62 using the details of the difference between the DUMS image and the MS image as a label
  • Step S63 train the ResCNN network, and after minimizing the loss function, freeze the training parameters to obtain the optimal model
  • Step S64 obtaining injection details In-details according to the optimal model.
  • The specific steps of step S7 include:
  • Step S71 using the In-details and DUMS images injected as the input of the nonlinear model NLCNN network;
  • Step S72 using the MS image as a label
  • Step S73 train the above network, and after minimizing the loss function, freeze the training parameters to obtain the optimal nonlinear model
  • Step S74 using the optimal nonlinear model to obtain a pan-sharpened image.
  • The present invention provides a two-stage lightweight network panchromatic sharpening method that effectively combines guided filtering and NSCT: guided filtering extracts multi-scale high- and low-frequency components of the MLPAN image while preserving edge features; NSCT extracts multi-scale, multi-directional high-frequency direction sub-band images and low-frequency sub-band images of the I component image; and the residual and nonlinear characteristics of ResCNN then extract richer detail information. The constructed shallow network is easy to train and resists overfitting; since the relationship between DUMS and LPAN images is nonlinear, the nonlinearity of the shallow CNN is used to train on the injected details and DUMS images to obtain the final fusion result.
  • The network designed by the invention is composed of two lightweight stages; it is relatively simple, easy to train, prevents overfitting, generalizes well, and preserves spectral information while improving spatial resolution.
  • Fig. 1 is a flow chart of the two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT of the present invention;
  • Fig. 2 is a schematic diagram of the NSCT filtering used in the method of the present invention;
  • Fig. 3 is a structural schematic diagram of the ResCNN network used in the method of the present invention.
  • a two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT provided by the present invention includes the following steps:
  • Step S1 obtain original remote sensing images from the Landsat-8, Landsat-7, Quickbird, and GF-2 satellites; the original images include MS images and PAN images;
  • Step S2 according to the Wald criterion, reduce the resolution of the preprocessed MS and PAN images and construct a simulation training set, a simulation test set, and a real test set, where the simulation training and test sets include the down-sampled-then-up-sampled multispectral DUMS images, reduced-resolution panchromatic LPAN images, and MS images, and the real test set includes the up-sampled multispectral UMS images and the panchromatic PAN images.
  • the specific steps include:
  • Step S21 according to the Wald criterion and the spatial resolution ratio between the panchromatic image and the multispectral image, the MS image and the PAN image are down-sampled using the bicubic interpolation method, and the reduced-resolution LPAN image and the DMS image are obtained;
  • Step S22 Upsampling the DMS image using a bicubic interpolation method according to the Wald criterion, and obtaining a DUMS image, wherein the size of the DUMS image is the same as that of the LPAN image;
  • Step S23 Upsampling the MS image using the bicubic interpolation method according to the Wald criterion, and obtaining a UMS image, the size of the UMS image is the same as the size of the PAN image;
  • Step S24 constructing a simulation training set and a simulation test set from DUMS images, LPAN images and MS images, and constructing a real test set from UMS images and PAN images.
  • The present invention uses the Landsat-8 DUMS, LPAN, and MS images as the simulation training set; to better verify performance, the DUMS, LPAN, and MS images of the four satellites Landsat-8, Landsat-7, Quickbird, and GF-2 are used as simulation test sets, and their MS and PAN images as real test sets.
  • Step S3 use the AIHS transformation on the DUMS image in the simulation training set to obtain the brightness I component image, and use the I component image to histogram-equalize the LPAN image to obtain the MLPAN image, where the AIHS transformation obtains the I component as I = Σ_{i=1}^{N} a_i · MS_i, with i the i-th channel, a_i the adaptive coefficient, and N the total number of channels.
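Equalizing the LPAN histogram against the I component (to produce MLPAN) behaves like classic histogram matching. A minimal NumPy sketch, assuming the usual empirical-CDF mapping formulation (the patent does not spell out the exact procedure):

```python
import numpy as np

def match_histogram(source, reference):
    """Map the gray-level distribution of `source` onto that of `reference`
    via their empirical CDFs (classic histogram matching)."""
    shape = source.shape
    src, ref = source.ravel(), reference.ravel()
    s_vals, s_idx, s_counts = np.unique(src, return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(ref, return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # for each source quantile, look up the reference value at that quantile
    matched_vals = np.interp(s_cdf, r_cdf, r_vals)
    return matched_vals[s_idx].reshape(shape)

rng = np.random.default_rng(0)
lpan = rng.normal(120.0, 30.0, (64, 64))    # stand-in LPAN image
i_comp = rng.normal(90.0, 15.0, (64, 64))   # stand-in I-component image
mlpan = match_histogram(lpan, i_comp)       # MLPAN: LPAN matched to I
```

After matching, the MLPAN statistics approximate those of the I component, which is what makes the subsequent guided filtering with the I component as guide well-behaved.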
  • Step S4 filter the MLPAN image with a guided filter to obtain multi-scale high-frequency components MLPAN Hn and low-frequency components MLPAN Ln. Specifically: the input image of the guided filter is the MLPAN image and the guide image is the I component image; filtering gives the low-frequency output MLPAN i = GF(MLPAN i-1 , I), where GF is the guided filter and MLPAN i-1 is the output of the (i-1)-th filtering (for i = 1 it is the MLPAN image itself); the i-th low-frequency component is MLPAN Li = MLPAN i and the i-th high-frequency component is MLPAN Hi = MLPAN Li-1 − MLPAN Li ; after n filterings, n high-frequency components MLPAN Hn and n low-frequency components MLPAN Ln are obtained.
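The multi-scale decomposition of step S4 can be sketched as below. The guided-filter body follows He et al.'s standard box-window formulation; the radius `r` and regularization `eps` are illustrative assumptions, as the patent does not fix them.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(p, I, r=4, eps=1e-3):
    """He et al.'s guided filter (sketch): smooth input p under guide I.
    r is the box-window radius, eps the regularization strength."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1, mode="reflect")
    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)

def multiscale_decompose(mlpan, I, n=3):
    """Step S4 as stated: repeated guided filtering of MLPAN with guide I;
    low_i = GF(low_{i-1}, I), high_i = low_{i-1} - low_i."""
    highs, lows, prev = [], [], mlpan
    for _ in range(n):
        low = guided_filter(prev, I)
        highs.append(prev - low)   # MLPAN_Hi
        lows.append(low)           # MLPAN_Li
        prev = low
    return highs, lows

rng = np.random.default_rng(3)
mlpan = rng.random((64, 64))
I = rng.random((64, 64))
highs, lows = multiscale_decompose(mlpan, I, n=3)
```

By construction each level is a perfect split: the high- and low-frequency components of level i sum back to the low-frequency image of level i−1.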
  • Step S5 use NSCT to filter the I component image to obtain multi-scale, multi-directional high-frequency direction sub-band images I Hn and low-frequency sub-band images I Ln. NSCT consists of a non-subsampled pyramid filter bank (NSPFB) and a non-subsampled directional filter bank (NSDFB), as shown in Figure 2. The NSPFB comprises the low-pass decomposition/reconstruction filter pair {D 0 (X), D 1 (X)} and the high-pass decomposition/reconstruction filter pair {G 0 (X), G 1 (X)}, which satisfy the Bezout identity (a 1-D polynomial perfect-reconstruction condition): D 0 (X)D 1 (X) + G 0 (X)G 1 (X) = 1.
  • The NSDFB comprises the fan (sector) decomposition/reconstruction filter pair {C 0 (X), C 1 (X)} and the checkerboard decomposition/reconstruction filter pair {Q 0 (X), Q 1 (X)}, which satisfy the analogous Bezout identity: C 0 (X)C 1 (X) + Q 0 (X)Q 1 (X) = 1.
  • The specific steps of step S5 include:
  • Step S51 using NSPFB to decompose the I component image to obtain the low-frequency sub-band image I Li and the high-frequency sub-band image I Hi ;
  • Step S52 using NSPFB to decompose the low-frequency sub-band image, and obtain the low-frequency sub-band image and the high-frequency sub-band image of the next layer;
  • Step S53 using NSDFB to filter the high-frequency sub-band images of each layer respectively, to obtain the high-frequency direction sub-band images of each layer.
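The pyramid part of steps S51–S52 can be illustrated with a deliberately simplified stand-in: each level splits the current low-pass image into a coarser low-pass image and a high-pass residual, with no downsampling, so every sub-band keeps the input size. This is only an analogy; a real NSCT uses the NSPFB/NSDFB filter banks and additionally filters each high-pass band directionally (step S53), which is omitted here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nsp_decompose(img, levels=3):
    """Simplified non-subsampled pyramid stand-in (NOT a true NSPFB):
    Gaussian low-pass with a wider kernel at each level, high-pass as the
    residual; all sub-bands keep the full image size, as in NSCT."""
    highs, lows, current = [], [], img
    for k in range(levels):
        low = gaussian_filter(current, sigma=2.0 ** k)  # widen per level
        highs.append(current - low)   # high-frequency sub-band of level k
        lows.append(low)              # low-frequency sub-band of level k
        current = low                 # step S52: decompose the low band again
    return highs, lows

rng = np.random.default_rng(4)
I_comp = rng.random((64, 64))
I_H, I_L = nsp_decompose(I_comp, levels=3)
```

As with the guided-filter decomposition, the sub-bands are a perfect split: the sum of all high-pass bands plus the final low-pass band reconstructs the input.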
  • Step S6 construct the detail extraction network ResCNN from the DUMS image, MLPAN image, high-frequency components MLPAN Hn, low-frequency components MLPAN Ln, high-frequency direction sub-band images I Hn, and low-frequency sub-band images I Ln, and obtain the injected details In-details. The specific steps include:
  • Step S61 take the DUMS image, MLPAN image, high-frequency components MLPAN Hn, low-frequency components MLPAN Ln, high-frequency direction sub-band images I Hn, and low-frequency sub-band images I Ln as the input of the ResCNN network. As shown in Figure 3, the ResCNN network consists of 2 convolutional layers; each layer first applies batch normalization (BN), then the ReLU activation, and then the convolution operation; the convolution kernel size is 3×3, and the convolution of the direct (skip) connection is 1×1;
  • Step S62 using the details of the difference between the DUMS image and the MS image as a label
  • Step S63 train the ResCNN network, and after minimizing the loss function, freeze the training parameters to obtain the optimal model
  • Step S64 use the optimal model to obtain richer detail features, i.e. the injected details In-details.
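The ResCNN forward pass of steps S61–S64 (BN, then ReLU, then convolution, twice, plus a 1×1 direct connection added as a residual) can be sketched in NumPy. The weights here are random stand-ins and the channel counts are assumptions; training against the DUMS−MS difference labels (steps S62–S63) is out of scope.

```python
import numpy as np

def conv2d(x, w):
    """'Same'-padded 2-D convolution (cross-correlation, CNN convention).
    x: (H, W, Cin), w: (k, k, Cin, Cout)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    win = np.lib.stride_tricks.sliding_window_view(xp, (k, k), axis=(0, 1))
    return np.einsum("hwcij,ijco->hwo", win, w)  # (H, W, Cout)

def bn_relu_conv(x, w, eps=1e-5):
    """One ResCNN layer in the stated order: BN, then ReLU, then conv
    (forward-only sketch, no learned BN scale/shift)."""
    mu = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    x = (x - mu) / np.sqrt(var + eps)   # batch normalization
    x = np.maximum(x, 0.0)              # ReLU
    return conv2d(x, w)

def rescnn_forward(x, w1, w2, w_skip):
    """Two 3x3 BN-ReLU-conv layers plus a 1x1 direct connection."""
    out = bn_relu_conv(bn_relu_conv(x, w1), w2)
    return out + conv2d(x, w_skip)      # residual addition

rng = np.random.default_rng(5)
x = rng.random((32, 32, 8))                  # stacked inputs of step S61
w1 = rng.normal(0, 0.1, (3, 3, 8, 16))       # assumed channel widths
w2 = rng.normal(0, 0.1, (3, 3, 16, 3))
w_skip = rng.normal(0, 0.1, (1, 1, 8, 3))
details = rescnn_forward(x, w1, w2, w_skip)  # In-details stand-in
```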
  • Step S7 use the injected details In-details and the DUMS image as the input of the shallow CNN network, with the MS image as the output, to establish the nonlinear NLCNN network; fully train the NLCNN network to obtain the optimal nonlinear model, freeze its parameters, and use the optimal nonlinear model to obtain the pan-sharpened image.
  • The NLCNN network is composed of a single-layer CNN, which performs the convolution operation first, then BN processing, and finally ReLU activation; a 1×1×n convolution kernel is used, where n is the number of channels of the output MS image. In this embodiment 3 channels are used, so the convolution kernel is 1×1×3, with 1×1 the spatial size of the kernel.
  • The NLCNN network convolutional layer is expressed as:
  • MS = max(0, W i * (DUMS, InD) + B i );
  • where W i is the convolution kernel, (DUMS, InD) is the stacked input of the DUMS image and the injected details InD, and B i is the bias.
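With a 1×1×n kernel, the layer above reduces to a per-pixel linear map over the concatenated channels followed by ReLU. A minimal sketch (the BN step is omitted for brevity, and the channel counts and random weights are assumptions):

```python
import numpy as np

def nlcnn_forward(dums, ind, W, b):
    """Forward pass of the single-layer NLCNN as written:
    MS = max(0, W_i * (DUMS, InD) + B_i). A 1x1 convolution over the
    concatenated channels is exactly a channel-wise matrix multiply."""
    x = np.concatenate([dums, ind], axis=-1)   # (H, W, C_dums + C_ind)
    return np.maximum(x @ W + b, 0.0)          # 1x1 conv + bias + ReLU

rng = np.random.default_rng(6)
dums = rng.random((32, 32, 3))
ind = rng.random((32, 32, 3))       # injected details from ResCNN
W = rng.normal(0, 0.1, (6, 3))      # assumed: 6 input -> 3 output channels
b = np.zeros(3)
ms_hat = nlcnn_forward(dums, ind, W, b)   # fused (pan-sharpened) estimate
```

In training, `W` and `b` would be fitted against the MS labels (steps S72–S73) and then frozen for inference.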
  • The specific steps of step S7 include:
  • Step S71 using the In-details and DUMS images injected as the input of the nonlinear model NLCNN network;
  • Step S72 using the MS image as a label
  • Step S73 train the above network, and after minimizing the loss function, freeze the training parameters to obtain the optimal nonlinear model
  • Step S74 using the optimal nonlinear model to obtain a pan-sharpened image.
  • The present invention provides an embodiment to demonstrate its effectiveness, using remote sensing images acquired by the Landsat-8 satellite sensor: the multispectral image has a spatial resolution of 30 meters and a pixel size of 600×600, and the corresponding panchromatic image has a resolution of 15 meters and a pixel size of 1200×1200. According to the Wald criterion, the 15-meter panchromatic image and the 30-meter multispectral image are down-sampled by a factor of 2 to obtain a 30-meter panchromatic image and a 60-meter multispectral image.


Abstract

Provided in the present invention is a two-order lightweight network panchromatic sharpening method combining guided filtering and NSCT. The edge- and detail-preserving advantage of guided filtering and the multi-scale, multi-directional decomposition advantage of NSCT are combined with a CNN to construct a two-order lightweight network model that fuses an MS image and a PAN image. By means of guided filtering, the histogram-matched panchromatic image MLPAN is filtered to obtain multi-scale high-frequency and low-frequency components; by means of NSCT, an I-component image extracted from the MS image is filtered to obtain multi-scale, multi-directional high-frequency directional sub-band images and low-frequency sub-band images; a detail extraction network, ResCNN, is constructed using the advantages of a residual module to extract the injected details (In-details); finally, a nonlinear model, NLCNN, is constructed by taking the In-details and a DUMS image as inputs, and the NLCNN is fully trained to obtain an optimal model. By means of the method, spectral information is retained while spatial resolution is increased to a greater extent, the network structure is simple, training time is reduced, overfitting is prevented from occurring, and fusion performance is improved.

Description

A Two-Stage Lightweight Network Panchromatic Sharpening Method Combining Guided Filtering and NSCT

Technical Field
The invention relates to the technical field of remote sensing image processing, and in particular to a two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT.
Background Art
Remote sensing images are widely used across industries, e.g. yield prediction, forestry pest detection, forest natural disaster prediction, geological exploration, national security, land use, and environmental change detection. Limited by satellite sensor technology, however, images with both high spatial and high spectral resolution cannot be acquired directly; only panchromatic images (PAN) with high spatial but low spectral resolution and multispectral images (MS) with low spatial but high spectral resolution are available. Practical applications often require images with both high spatial and high spectral resolution, and sometimes high temporal resolution as well. The common approach is to exploit the redundant and complementary information of PAN and MS images to obtain a high-spatial, high-spectral-resolution (HSHM) image, generally via image enhancement, super-resolution reconstruction, or image fusion. The mainstream research technique is image fusion, which combines multi-source images by some method into a single higher-quality, more informative image matching human visual perception, so that decision makers can make more precise decisions from clearer imagery.
The fusion of MS and PAN images, also known as panchromatic sharpening, is one of the hot topics in remote sensing image processing. Fusion methods can be grouped into component substitution, multi-resolution analysis, variational methods, and deep learning. Component substitution methods such as IHS, GIHS, AIHS, PCA, Brovey, and GS can improve spatial resolution but generally distort spectral information to varying degrees. Multi-resolution analysis methods such as the wavelet transform, Laplacian pyramid (LP) decomposition, contourlet transform, curvelet transform, and non-subsampled contourlet transform (NSCT) (e.g. the NSCT-based multi-focus image fusion algorithm of publication CN103632353A) reduce spectral distortion to some extent, but their spatial resolution is relatively low and artifacts may appear. With the rapid development of deep learning in computer vision, networks such as PNN, PCNN (e.g. the image fusion method based on gradient-domain guided filtering and an improved PCNN of publication CN112184646A), DRPNN, PanNet, and PanGAN have been applied to panchromatic sharpening with some success, but spectral distortion, low spatial resolution, low fusion quality, overfitting, and long training times remain.
Summary of the Invention
In view of this, the present invention proposes a two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT, which preserves spectral information while improving spatial resolution and achieves high fusion quality; the two-stage lightweight network is simple, trains quickly, and prevents overfitting.
The present invention is achieved through the following technical solution:
A two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT, comprising the following steps:
Step S1: obtain satellite remote sensing images and preprocess the MS and PAN images therein;
Step S2: according to the Wald criterion, reduce the resolution of the preprocessed MS and PAN images, and construct a simulation training set, a simulation test set, and a real test set, where the simulation training and test sets comprise DUMS, LPAN, and MS images and the real test set comprises UMS and PAN images;
Step S3: apply the AIHS transformation to the DUMS images in the simulation training set to obtain the brightness I component image, and use the I component image to histogram-equalize the LPAN image to obtain the MLPAN image;
Step S4: filter the MLPAN image with a guided filter to obtain multi-scale high-frequency components MLPAN Hn and low-frequency components MLPAN Ln;
Step S5: filter the I component image with NSCT to obtain multi-scale, multi-directional high-frequency direction sub-band images I Hn and low-frequency sub-band images I Ln;
Step S6: construct the detail extraction network ResCNN from the DUMS image, MLPAN image, MLPAN Hn, MLPAN Ln, I Hn, and I Ln, and obtain the injected details In-details;
Step S7: use In-details and the DUMS image as the input of a shallow CNN network, with the MS image as the output, to establish the nonlinear NLCNN network; fully train the NLCNN network to obtain the optimal nonlinear model, freeze its parameters, and use the optimal nonlinear model to obtain the pan-sharpened image.
Preferably, the preprocessing in step S1 comprises atmospheric correction and spatial registration.
Preferably, the specific steps of step S2 include:
Step S21: according to the Wald criterion and the spatial-resolution ratio between the panchromatic and multispectral images, down-sample the MS and PAN images by bicubic interpolation to obtain the reduced-resolution LPAN and DMS images;
Step S22: up-sample the DMS image by bicubic interpolation according to the Wald criterion to obtain the DUMS image;
Step S23: up-sample the MS image by bicubic interpolation according to the Wald criterion to obtain the UMS image;
Step S24: construct the simulation training and test sets from the DUMS, LPAN, and MS images, and the real test set from the UMS and PAN images.
优选的,所述步骤S3中的AIHS变换获取I分量图像的表达式为:Preferably, the AIHS transformation in the step S3 obtains the expression of the I component image as:
I = Σ_{i=1}^{N} a_i·MS_i
其中i为第i个通道，a_i为自适应系数，N为通道的总数。Where i denotes the i-th channel, a_i is the adaptive coefficient, and N is the total number of channels.
优选的，所述步骤S4的具体步骤为：使用引导滤波器对MLPAN图像进行滤波，引导滤波器的输入图像为MLPAN图像，引导图像为I分量图像，进行滤波后得到低频分量MLPAN_i = GF(MLPAN_(i-1), I)，其中GF为引导滤波器，MLPAN_(i-1)是第i-1次滤波的输出图像，当i=1时，即是MLPAN图像，则第i个低频分量MLPAN_Li = MLPAN_i，第i个高频分量MLPAN_Hi = MLPAN_L(i-1) - MLPAN_Li，在进行n次滤波后得到n个高频分量MLPAN_Hn以及n个低频分量MLPAN_Ln。Preferably, step S4 is specifically: filter the MLPAN image with a guided filter whose input image is the MLPAN image and whose guide image is the I-component image, obtaining the low-frequency component MLPAN_i = GF(MLPAN_(i-1), I), where GF denotes the guided filter and MLPAN_(i-1) is the output of the (i-1)-th filtering pass (for i = 1 it is the MLPAN image itself); the i-th low-frequency component is MLPAN_Li = MLPAN_i and the i-th high-frequency component is MLPAN_Hi = MLPAN_L(i-1) - MLPAN_Li; after n filtering passes, n high-frequency components MLPAN_Hn and n low-frequency components MLPAN_Ln are obtained.
优选的,所述步骤S5的NSCT包括非下采样金字塔滤波器组NSPFB和非下采样方向滤波器组NSDFB。Preferably, the NSCT in step S5 includes a non-downsampling pyramid filter bank NSPFB and a non-downsampling directional filter bank NSDFB.
优选的,所述步骤S5的具体步骤包括:Preferably, the specific steps of said step S5 include:
步骤S51、采用NSPFB对I分量图像进行分解，获得低频子带图像I_Li和高频子带图像I_Hi；Step S51: decompose the I-component image with the NSPFB to obtain the low-frequency sub-band image I_Li and the high-frequency sub-band image I_Hi;
步骤S52、采用NSPFB对低频子带图像进行分解,并获得下一层的低频子带图像和高频子带图像;Step S52, using NSPFB to decompose the low-frequency sub-band image, and obtain the low-frequency sub-band image and the high-frequency sub-band image of the next layer;
步骤S53、采用NSDFB分别对每一层的高频子带图像进行滤波,获得每一层的高频方向子带图像。Step S53, using NSDFB to filter the high-frequency sub-band images of each layer respectively, to obtain the high-frequency direction sub-band images of each layer.
优选的,所述步骤S6的具体步骤包括:Preferably, the specific steps of said step S6 include:
步骤S61、以DUMS图像、MLPAN图像、高频分量MLPAN_Hn、低频分量MLPAN_Ln、高频方向子带图像I_Hn以及低频子带图像I_Ln作为ResCNN网络的输入；Step S61: take the DUMS image, the MLPAN image, the high-frequency components MLPAN_Hn, the low-frequency components MLPAN_Ln, the high-frequency directional sub-band images I_Hn, and the low-frequency sub-band images I_Ln as the input of the ResCNN network;
步骤S62、将DUMS图像和MS图像之间相差的细节作为标签;Step S62, using the details of the difference between the DUMS image and the MS image as a label;
步骤S63、对ResCNN网络进行训练,并使损失函数最小后,冻结训练参数,得到最优模型;Step S63, train the ResCNN network, and after minimizing the loss function, freeze the training parameters to obtain the optimal model;
步骤S64、根据最优模型获得注入细节In-details。Step S64, obtaining injection details In-details according to the optimal model.
优选的,所述步骤S7的具体步骤包括:Preferably, the specific steps of said step S7 include:
步骤S71、将注入细节In-details、DUMS图像作为非线性模型NLCNN网络的输入;Step S71, using the In-details and DUMS images injected as the input of the nonlinear model NLCNN network;
步骤S72、将MS图像作为标签;Step S72, using the MS image as a label;
步骤S73、对上述网络进行训练,并使损失函数最小后,冻结训练参数,得到最优非线性模型;Step S73, train the above network, and after minimizing the loss function, freeze the training parameters to obtain the optimal nonlinear model;
步骤S74、使用最优非线性模型获得全色锐化图像。Step S74, using the optimal nonlinear model to obtain a pan-sharpened image.
与现有技术相比,本发明的有益效果是:Compared with prior art, the beneficial effect of the present invention is:
本发明提供了一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法，将引导滤波和NSCT有效地结合，其中使用引导滤波提取MLPAN图像的多尺度高频分量以及低频分量，能够保持边缘特征；使用NSCT提取I分量图像多尺度多方向的高频方向子带图像以及低频子带图像，再使用ResCNN的残差特性及非线性特性提取更丰富的细节信息，构建浅层的网络，便于训练，防止出现过拟合的现象；由于DUMS图像和LPAN图像之间是非线性关系，利用浅层的CNN网络的非线性将注入细节和DUMS图像进行训练，得到最终的融合结果。本发明所设计的网络由两阶轻量型网络构成，网络比较简单，容易训练，防止过拟合，泛化能力强，在提高空间分辨率的同时保留光谱信息。The present invention provides a two-stage lightweight network pan-sharpening method that effectively combines guided filtering and NSCT. Guided filtering extracts the multi-scale high-frequency and low-frequency components of the MLPAN image while preserving edge features; NSCT extracts the multi-scale, multi-directional high-frequency directional sub-band images and low-frequency sub-band images of the I-component image; the residual and nonlinear properties of ResCNN then extract richer detail information through a shallow network that is easy to train and resistant to overfitting. Because the relationship between the DUMS and LPAN images is nonlinear, the nonlinearity of a shallow CNN is used to train on the injected details and the DUMS image to obtain the final fusion result. The designed network consists of two lightweight stages; it is simple, easy to train, resistant to overfitting, and generalizes well, improving spatial resolution while preserving spectral information.
附图说明Description of drawings
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的优选实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the following will briefly introduce the drawings that need to be used in the description of the embodiments. Obviously, the drawings in the following description are only preferred embodiments of the present invention. For those skilled in the art, other drawings can also be obtained based on these drawings without any creative effort.
图1为本发明的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法的流程图;Fig. 1 is a flow chart of a two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT of the present invention;
图2为本发明的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法的NSCT滤波示意图;Fig. 2 is a schematic diagram of NSCT filtering of a two-order lightweight network panchromatic sharpening method combining guided filtering and NSCT of the present invention;
图3为本发明的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法的ResCNN结构示意图。Fig. 3 is a schematic diagram of the ResCNN structure of the two-stage lightweight network pan-sharpening method combining guided filtering and NSCT of the present invention.
具体实施方式detailed description
为了更好理解本发明技术内容,下面提供一个具体实施例,并结合附图对本发明做进一步的说明。In order to better understand the technical content of the present invention, a specific embodiment is provided below, and the present invention is further described in conjunction with the accompanying drawings.
参见图1,本发明提供的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法,包括以下步骤:Referring to Fig. 1, a two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT provided by the present invention includes the following steps:
步骤S1、获取Landsat-8、Landsat-7、Quickbird、GF-2卫星遥感原始图像，遥感原始图像中包含MS图像和PAN图像，对遥感图像中的MS图像和PAN图像进行预处理，预处理包括大气校正和空间配准。Step S1: acquire raw remote sensing images from the Landsat-8, Landsat-7, Quickbird, and GF-2 satellites, each containing an MS image and a PAN image, and preprocess the MS and PAN images; the preprocessing includes atmospheric correction and spatial registration.
步骤S2、根据Wald准则对预处理后的MS图像和PAN图像进行降分辨率处理，并构建仿真训练集、仿真测试集以及真实测试集，其中仿真训练集和仿真测试集包括降分辨率上采样的多光谱图像DUMS、降分辨率全色图像LPAN以及多光谱图像MS，真实测试集包括上采样的多光谱图像UMS图像和全色图像PAN图像，具体步骤包括：Step S2: reduce the resolution of the preprocessed MS and PAN images according to the Wald criterion and construct a simulation training set, a simulation test set, and a real test set, where the simulation training set and simulation test set include the reduced-resolution up-sampled multispectral image DUMS, the reduced-resolution panchromatic image LPAN, and the multispectral image MS, and the real test set includes the up-sampled multispectral image UMS and the panchromatic image PAN. The specific steps include:
步骤S21、根据Wald准则及全色图像和多光谱图像之间的空间分辨率之比对MS图像和PAN图像使用双三次插值方法进行下采样,并获得降分辨率的LPAN图像以及DMS图像;Step S21, according to the Wald criterion and the spatial resolution ratio between the panchromatic image and the multispectral image, the MS image and the PAN image are down-sampled using the bicubic interpolation method, and the reduced-resolution LPAN image and the DMS image are obtained;
步骤S22、根据Wald准则对DMS图像使用双三次插值方法进行上采样,并获得DUMS图像,其中DUMS图像的尺寸和LPAN图像的尺寸相同;Step S22: Upsampling the DMS image using a bicubic interpolation method according to the Wald criterion, and obtaining a DUMS image, wherein the size of the DUMS image is the same as that of the LPAN image;
步骤S23、根据Wald准则对MS图像使用双三次插值方法进行上采样,并获得UMS图像,UMS图像的尺寸和PAN图像的尺寸相同;Step S23: Upsampling the MS image using the bicubic interpolation method according to the Wald criterion, and obtaining a UMS image, the size of the UMS image is the same as the size of the PAN image;
步骤S24、由DUMS图像、LPAN图像以及MS图像构建仿真训练集和仿真测试集,由UMS图像和PAN图像构建真实测试集。Step S24, constructing a simulation training set and a simulation test set from DUMS images, LPAN images and MS images, and constructing a real test set from UMS images and PAN images.
本发明使用Landsat-8卫星的DUMS图像、LPAN图像、MS图像作为仿真训练集，为了更好地验证本发明的性能，使用Landsat-8、Landsat-7、Quickbird以及GF-2四个卫星的DUMS图像、LPAN图像、MS图像作为仿真测试集，MS图像和PAN图像作为真实测试集。The present invention uses the DUMS, LPAN, and MS images of the Landsat-8 satellite as the simulation training set; to better verify the performance of the present invention, the DUMS, LPAN, and MS images of four satellites (Landsat-8, Landsat-7, Quickbird, and GF-2) are used as the simulation test set, and the MS and PAN images as the real test set.
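The Wald-protocol simulation of steps S21-S24 can be sketched in a few lines. This is a minimal sketch, assuming bicubic resampling via `scipy.ndimage.zoom` (order=3) and a PAN/MS resolution ratio of 2; the function name `simulate_wald` is chosen here for illustration and does not appear in the patent:

```python
import numpy as np
from scipy.ndimage import zoom

def simulate_wald(ms, pan, ratio=2):
    """Build the simulated data of steps S21-S24.

    ms:  (H, W, C) multispectral image
    pan: (ratio*H, ratio*W) panchromatic image
    ratio: PAN/MS spatial-resolution ratio (assumed 2 here)
    """
    # S21: bicubic down-sampling of PAN and MS by the resolution ratio
    lpan = zoom(pan, 1.0 / ratio, order=3)                   # reduced-resolution PAN
    dms = zoom(ms, (1.0 / ratio, 1.0 / ratio, 1), order=3)   # reduced-resolution MS
    # S22: bicubic up-sampling of DMS back to the LPAN grid -> DUMS
    dums = zoom(dms, (ratio, ratio, 1), order=3)
    # S23: bicubic up-sampling of the original MS to the PAN grid -> UMS
    ums = zoom(ms, (ratio, ratio, 1), order=3)
    # S24: (DUMS, LPAN, MS) form the simulated sets; (UMS, PAN) the real test set
    return dums, lpan, ums
```

DUMS deliberately ends up on the same grid as LPAN (down then up by the same factor), which is what lets the MS image serve as the reference label at reduced resolution.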
步骤S3、对仿真训练集中的DUMS图像使用AIHS变换得到亮度I分量图像,并使用I分量图像对LPAN图像进行直方图均衡化处理,得到MLPAN图像,其中AIHS变换获取I分量图像的表达式为:Step S3, using AIHS transformation on the DUMS image in the simulation training set to obtain the brightness I component image, and using the I component image to carry out histogram equalization processing on the LPAN image to obtain the MLPAN image, wherein the expression of the AIHS transformation to obtain the I component image is:
I = Σ_{i=1}^{N} a_i·MS_i
其中i为第i个通道，a_i为自适应系数，N为通道的总数。Where i denotes the i-th channel, a_i is the adaptive coefficient, and N is the total number of channels.
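Step S3 can be sketched under two common assumptions (both are interpretations, not taken verbatim from the patent): the adaptive coefficients a_i are fitted by least squares so that the weighted band sum approximates the reduced-resolution PAN (a usual reading of AIHS), and the "histogram equalization" of LPAN against the I component is implemented as histogram matching. Function names are illustrative:

```python
import numpy as np

def aihs_intensity(dums, lpan):
    """I = sum_i a_i * MS_i, with the adaptive coefficients a_i fitted by
    least squares against the reduced-resolution PAN (AIHS-style fit)."""
    h, w, c = dums.shape
    bands = dums.reshape(-1, c)                    # one row per pixel, one column per band
    a, *_ = np.linalg.lstsq(bands, lpan.reshape(-1), rcond=None)
    return (bands @ a).reshape(h, w)               # intensity component I

def hist_match(src, ref):
    """Match the histogram of src (LPAN) to ref (I component) -> MLPAN."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size            # source CDF
    r_cdf = np.cumsum(r_cnt) / ref.size            # reference CDF
    return np.interp(s_cdf, r_cdf, r_vals)[s_idx].reshape(src.shape)
```

Matching LPAN to I rather than equalizing it in isolation keeps the modified panchromatic image (MLPAN) radiometrically aligned with the multispectral intensity, which is what the later detail extraction relies on.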
步骤S4、采用引导滤波器对MLPAN图像进行滤波，得到多尺度的高频分量MLPAN_Hn以及低频分量MLPAN_Ln，具体步骤为：Step S4: filter the MLPAN image with a guided filter to obtain the multi-scale high-frequency components MLPAN_Hn and low-frequency components MLPAN_Ln; the specific steps are:
使用引导滤波器对MLPAN图像进行滤波，引导滤波器的输入图像为MLPAN图像，引导图像为I分量图像，进行滤波后得到低频分量MLPAN_i = GF(MLPAN_(i-1), I)，其中GF为引导滤波器，MLPAN_(i-1)是第i-1次滤波的输出图像，当i=1时，MLPAN_(i-1)即为MLPAN图像，则第i个低频分量MLPAN_Li = MLPAN_i，第i个高频分量MLPAN_Hi = MLPAN_L(i-1) - MLPAN_Li，在进行n次滤波后得到n个高频分量MLPAN_Hn以及n个低频分量MLPAN_Ln。Filter the MLPAN image with a guided filter whose input image is the MLPAN image and whose guide image is the I-component image; the filtering yields the low-frequency component MLPAN_i = GF(MLPAN_(i-1), I), where GF denotes the guided filter and MLPAN_(i-1) is the output of the (i-1)-th filtering pass (for i = 1 it is the MLPAN image itself). The i-th low-frequency component is then MLPAN_Li = MLPAN_i and the i-th high-frequency component is MLPAN_Hi = MLPAN_L(i-1) - MLPAN_Li; after n filtering passes, n high-frequency components MLPAN_Hn and n low-frequency components MLPAN_Ln are obtained.
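The iterative decomposition above can be sketched as follows. This uses a minimal grey-scale guided filter built from box filters (in the spirit of He et al.'s formulation) rather than any particular library implementation; the radius `r` and regularizer `eps` are placeholder assumptions, not values from the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(p, guide, r=4, eps=1e-3):
    """Minimal grey-scale guided filter: smooth p under the guide image."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)   # box filter of radius r
    m_g, m_p = mean(guide), mean(p)
    cov = mean(guide * p) - m_g * m_p
    var = mean(guide * guide) - m_g * m_g
    a = cov / (var + eps)                                # local linear coefficients
    b = m_p - a * m_g
    return mean(a) * guide + mean(b)

def multiscale_decompose(mlpan, I, n=3):
    """n guided-filter passes: lows = [MLPAN_L1..Ln], highs = [MLPAN_H1..Hn],
    with MLPAN_Hi = MLPAN_L(i-1) - MLPAN_Li and MLPAN_L0 = MLPAN."""
    lows, highs, prev = [], [], mlpan
    for _ in range(n):
        low = guided_filter(prev, I)   # MLPAN_Li, guided by the I component
        highs.append(prev - low)       # MLPAN_Hi
        lows.append(low)
        prev = low
    return lows, highs
```

By construction the decomposition is perfectly invertible: MLPAN = MLPAN_Ln + Σ_i MLPAN_Hi, so no detail is lost before the network stage.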
步骤S5、采用NSCT对I分量图像进行滤波，得到多尺度多方向的高频方向子带图像I_Hn以及低频子带图像I_Ln，其中NSCT包括非下采样金字塔滤波器组NSPFB和非下采样方向滤波器组NSDFB，如图2所示，NSPFB的低通滤波器包括低通分解滤波器和低通重构滤波器{D_0(X), D_1(X)}，NSPFB的高通滤波器包括高通分解滤波器和高通重构滤波器{G_0(X), G_1(X)}，NSPFB满足Bezout恒等式1D多项式函数：Step S5: filter the I-component image with NSCT to obtain the multi-scale, multi-directional high-frequency directional sub-band images I_Hn and low-frequency sub-band images I_Ln. The NSCT comprises a non-subsampled pyramid filter bank NSPFB and a non-subsampled directional filter bank NSDFB, as shown in Fig. 2. The low-pass filters of the NSPFB include the low-pass decomposition filter and low-pass reconstruction filter {D_0(X), D_1(X)}, and its high-pass filters include the high-pass decomposition filter and high-pass reconstruction filter {G_0(X), G_1(X)}; the NSPFB satisfies the Bezout identity for 1-D polynomial functions:
D_0(X)·D_1(X) + G_0(X)·G_1(X) = 1
所述NSDFB的扇形滤波器包括扇形分解滤波器和扇形重构滤波器{C_0(X), C_1(X)}，NSDFB的棋盘滤波器包括棋盘分解滤波器和棋盘重构滤波器{Q_0(X), Q_1(X)}，NSDFB满足Bezout恒等式1D多项式函数：The fan filters of the NSDFB include the fan decomposition filter and fan reconstruction filter {C_0(X), C_1(X)}, and its checkerboard filters include the checkerboard decomposition filter and checkerboard reconstruction filter {Q_0(X), Q_1(X)}; the NSDFB satisfies the Bezout identity for 1-D polynomial functions:
C_0(X)·C_1(X) + Q_0(X)·Q_1(X) = 1
步骤S5的具体步骤包括:The specific steps of step S5 include:
步骤S51、采用NSPFB对I分量图像进行分解，获得低频子带图像I_Li和高频子带图像I_Hi；Step S51: decompose the I-component image with the NSPFB to obtain the low-frequency sub-band image I_Li and the high-frequency sub-band image I_Hi;
步骤S52、采用NSPFB对低频子带图像进行分解,并获得下一层的低频子带图像和高频子带图像;Step S52, using NSPFB to decompose the low-frequency sub-band image, and obtain the low-frequency sub-band image and the high-frequency sub-band image of the next layer;
步骤S53、采用NSDFB分别对每一层的高频子带图像进行滤波,获得每一层的高频方向子带图像。Step S53, using NSDFB to filter the high-frequency sub-band images of each layer respectively, to obtain the high-frequency direction sub-band images of each layer.
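The NSPFB recursion of steps S51-S52 can be illustrated with a simplified stand-in: an undecimated à-trous low-pass (B3-spline kernel dilated per level) is used in place of the patent's actual NSPFB filter pair, and the directional filtering of step S53 (NSDFB) is omitted. This is only a structural sketch of the pyramid, not an NSCT implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def atrous_lowpass(img, level):
    """Undecimated low-pass with a B3-spline kernel dilated by 2**level
    (a stand-in for the NSPFB low-pass stage; not the patent's filters)."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    dilated = np.zeros(4 * 2**level + 1)
    dilated[::2**level] = k                       # insert the 'holes' between taps
    return convolve(img, np.outer(dilated, dilated), mode='reflect')

def nsp_decompose(I, levels=3):
    """S51-S52: split into one low-frequency band and `levels` high-frequency
    sub-bands, re-decomposing the low band at each level."""
    lows, highs, cur = [], [], I
    for lv in range(levels):
        low = atrous_lowpass(cur, lv)
        highs.append(cur - low)                   # high-frequency sub-band I_Hi
        lows.append(low)
        cur = low
    return lows, highs
```

Because no sub-band is subsampled, every I_Li and I_Hi stays the same size as the input, which is what allows them to be stacked directly as ResCNN input channels in step S6.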
步骤S6、根据DUMS图像、MLPAN图像、高频分量MLPAN_Hn、低频分量MLPAN_Ln、高频方向子带图像I_Hn以及低频子带图像I_Ln构建细节提取网络ResCNN，并获得注入细节In-details，具体步骤包括：Step S6: construct the detail-extraction network ResCNN from the DUMS image, the MLPAN image, the high-frequency components MLPAN_Hn, the low-frequency components MLPAN_Ln, the high-frequency directional sub-band images I_Hn, and the low-frequency sub-band images I_Ln, and obtain the injected details In-details; the specific steps include:
步骤S61、以DUMS图像、MLPAN图像、高频分量MLPAN_Hn、低频分量MLPAN_Ln、高频方向子带图像I_Hn以及低频子带图像I_Ln作为ResCNN网络的输入，如图3所示，ResCNN网络由2层卷积构成，每一层都是先归一化BN操作，再使用ReLu函数进行非线性激活，再进行卷积操作，卷积核大小为3×3，直连部分的卷积大小为1×1；Step S61: take the DUMS image, the MLPAN image, the high-frequency components MLPAN_Hn, the low-frequency components MLPAN_Ln, the high-frequency directional sub-band images I_Hn, and the low-frequency sub-band images I_Ln as the input of the ResCNN network. As shown in Fig. 3, the ResCNN network consists of 2 convolutional layers; each layer first applies batch normalization (BN), then nonlinear activation with the ReLU function, and then the convolution operation, with a 3×3 convolution kernel; the convolution in the skip connection is 1×1;
步骤S62、将DUMS图像和MS图像之间相差的细节作为标签;Step S62, using the details of the difference between the DUMS image and the MS image as a label;
步骤S63、对ResCNN网络进行训练,并使损失函数最小后,冻结训练参数,得到最优模型;Step S63, train the ResCNN network, and after minimizing the loss function, freeze the training parameters to obtain the optimal model;
步骤S64、根据最优模型进而得到更丰富的细节特征，即注入细节In-details。Step S64: use the optimal model to obtain richer detail features, i.e., the injected details In-details.
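The forward pass of the Fig. 3 structure (two pre-activation BN→ReLU→3×3-conv blocks plus a 1×1-convolved skip connection) can be sketched in NumPy. Weights here are random placeholders, normalization is per-channel over the spatial axes, and training, loss, and batch handling are omitted; this is a shape-level sketch of the architecture, not the trained network:

```python
import numpy as np
from scipy.ndimage import convolve

def bn_relu_conv(x, w):
    """One pre-activation block: BN -> ReLU -> 3x3 convolution.
    x: (Cin, H, W) feature map; w: (Cout, Cin, 3, 3) kernels."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sd = x.std(axis=(1, 2), keepdims=True) + 1e-5
    x = np.maximum((x - mu) / sd, 0.0)                     # BN then ReLU
    return np.stack([sum(convolve(x[c], w[o, c], mode='reflect')
                         for c in range(x.shape[0]))
                     for o in range(w.shape[0])])          # sum over input channels

def rescnn_forward(x, w1, w2, w_skip):
    """Two stacked blocks plus a 1x1-convolved identity branch."""
    y = bn_relu_conv(bn_relu_conv(x, w1), w2)
    skip = np.tensordot(w_skip, x, axes=([1], [0]))        # 1x1 conv == channel mix
    return y + skip
```

The 1×1 skip projection lets the residual branch learn only the difference between DUMS and MS (the label of step S62), which is why a network this shallow can still recover fine detail.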
步骤S7、将注入细节In-details和DUMS图像作为浅层CNN网络的输入，MS图像作为输出，建立非线性模型NLCNN网络，对NLCNN网络进行充分训练，获得最优非线性模型，对最优非线性模型的参数进行冻结，使用最优非线性模型获得全色锐化图像。在本实施例中，NLCNN网络由单层CNN组成，先进行卷积操作，再进行BN处理，最后使用ReLu激活函数进行激活，其中使用1×1×n卷积核，n是输出MS图像的通道数，本实施例中使用3个通道，其卷积核为1×1×3，1×1是卷积核的尺寸。Step S7: use the injected details In-details and the DUMS image as the input of a shallow CNN and the MS image as the output to build the nonlinear model NLCNN; train the NLCNN fully to obtain the optimal nonlinear model, freeze its parameters, and use the frozen model to produce the pan-sharpened image. In this embodiment the NLCNN consists of a single CNN layer that performs convolution first, then BN, and finally ReLU activation, using a 1×1×n convolution kernel, where n is the number of channels of the output MS image; this embodiment uses 3 channels, so the kernel is 1×1×3, with 1×1 being the spatial size of the kernel.
其中NLCNN网络卷积层表示为:The NLCNN network convolutional layer is expressed as:
MS = max(0, W_i*(DUMS, InD) + B_i)
其中W_i为卷积核，InD为注入细节，B_i为偏差。Where W_i is the convolution kernel, InD is the injected details, and B_i is the bias.
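Since a 1×1×n convolution acts on each pixel independently, the layer MS = max(0, W_i*(DUMS, InD) + B_i) reduces to a per-pixel matrix multiply followed by ReLU. A minimal sketch (BN omitted for brevity; the function name and channel counts are illustrative):

```python
import numpy as np

def nlcnn_forward(dums, in_details, w, b):
    """MS = max(0, W * (DUMS, InD) + B) with a 1x1xN kernel.
    dums, in_details: (H, W, C) arrays; w: (2C, N) weights; b: (N,) bias."""
    x = np.concatenate([dums, in_details], axis=-1)   # channel-stack the two inputs
    return np.maximum(x @ w + b, 0.0)                 # 1x1 conv == per-pixel matmul
```

With 3-band MS output as in the embodiment, `w` would be a (6, 3) matrix: six stacked input channels mapped to the three output bands at every pixel.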
步骤S7的具体步骤包括:The specific steps of step S7 include:
步骤S71、将注入细节In-details、DUMS图像作为非线性模型NLCNN网络的输入;Step S71, using the In-details and DUMS images injected as the input of the nonlinear model NLCNN network;
步骤S72、将MS图像作为标签;Step S72, using the MS image as a label;
步骤S73、对上述网络进行训练,并使损失函数最小后,冻结训练参数,得到最优非线性模型;Step S73, train the above network, and after minimizing the loss function, freeze the training parameters to obtain the optimal nonlinear model;
步骤S74、使用最优非线性模型获得全色锐化图像。Step S74, using the optimal nonlinear model to obtain a pan-sharpened image.
本发明提供一个实施例来论述有效性，采用Landsat-8卫星传感器获取的遥感图像，其中多光谱图像空间分辨率是30米，像素大小是600×600；对应的全色图像分辨率是15米，像素大小是1200×1200，按照Wald准则对空间分辨率15米全色图像和空间分辨率30米多光谱图像以2倍因子进行下采样操作获得30米全色和60米多光谱仿真图像，分别使用7种方法（Indusion、NSCT、SFIM、MTF_GLP、PNN、DRPNN、PanNet）与本发明一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法进行对比，无论是降分辨率还是全分辨率下的实验结果均可以表明本发明提出的方法的融合效果更优。The present invention provides an embodiment to demonstrate its effectiveness, using remote sensing images acquired by the Landsat-8 satellite sensor: the multispectral image has a spatial resolution of 30 m and a size of 600×600 pixels, and the corresponding panchromatic image has a resolution of 15 m and a size of 1200×1200 pixels. Following the Wald criterion, the 15 m panchromatic image and the 30 m multispectral image are down-sampled by a factor of 2 to obtain 30 m panchromatic and 60 m multispectral simulation images. Seven methods (Indusion, NSCT, SFIM, MTF_GLP, PNN, DRPNN, PanNet) are compared with the proposed two-stage lightweight network pan-sharpening method combining guided filtering and NSCT; the experimental results at both reduced resolution and full resolution show that the fusion effect of the proposed method is better.
以上所述仅为本发明的较佳实施例而已,并不用以限制本发明,凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。The above descriptions are only preferred embodiments of the present invention, and are not intended to limit the present invention. Any modifications, equivalent replacements, improvements, etc. made within the spirit and principles of the present invention shall be included in the scope of the present invention. within the scope of protection.

Claims (9)

  1. 一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法,其特征在于,包括以下步骤:A two-order lightweight network panchromatic sharpening method combining guided filtering and NSCT, characterized in that it comprises the following steps:
    步骤S1、获取卫星遥感图像,对遥感图像中的MS图像和PAN图像进行预处理;Step S1, obtaining satellite remote sensing images, and preprocessing the MS images and PAN images in the remote sensing images;
    步骤S2、根据Wald准则对预处理后的MS图像和PAN图像进行降分辨率处理,并构建仿真训练集、仿真测试集以及真实测试集,其中仿真训练集和仿真测试集包括DUMS图像、LPAN图像以及MS图像,真实测试集包括UMS图像和PAN图像;Step S2, according to the Wald criterion, the preprocessed MS image and PAN image are subjected to resolution reduction processing, and a simulation training set, a simulation test set and a real test set are constructed, wherein the simulation training set and the simulation test set include DUMS images and LPAN images As well as MS images, the real test set includes UMS images and PAN images;
    步骤S3、对仿真训练集中的DUMS图像使用AIHS变换得到亮度I分量图像,并使用I分量图像对LPAN图像进行直方图均衡化处理,得到MLPAN图像;Step S3, using AIHS transformation on the DUMS image in the simulation training set to obtain the brightness I component image, and using the I component image to perform histogram equalization processing on the LPAN image to obtain the MLPAN image;
    步骤S4、采用引导滤波器对MLPAN图像进行滤波，得到多尺度的高频分量MLPAN_Hn以及低频分量MLPAN_Ln；Step S4: filter the MLPAN image with a guided filter to obtain the multi-scale high-frequency components MLPAN_Hn and low-frequency components MLPAN_Ln;
    步骤S5、采用NSCT对I分量图像进行滤波，得到多尺度多方向的高频方向子带图像I_Hn以及低频子带图像I_Ln；Step S5: filter the I-component image with NSCT to obtain the multi-scale, multi-directional high-frequency directional sub-band images I_Hn and low-frequency sub-band images I_Ln;
    步骤S6、根据DUMS图像、MLPAN图像、高频分量MLPAN_Hn、低频分量MLPAN_Ln、高频方向子带图像I_Hn以及低频子带图像I_Ln构建细节提取网络ResCNN，并获得注入细节In-details；Step S6: construct the detail-extraction network ResCNN from the DUMS image, the MLPAN image, the high-frequency components MLPAN_Hn, the low-frequency components MLPAN_Ln, the high-frequency directional sub-band images I_Hn, and the low-frequency sub-band images I_Ln, and obtain the injected details In-details;
    步骤S7、将注入细节In-details和DUMS图像作为浅层CNN网络的输入，MS图像作为输出，建立非线性模型NLCNN网络，对NLCNN网络进行充分训练，获得最优非线性模型，对最优非线性模型的参数进行冻结，使用最优非线性模型获得全色锐化图像。Step S7: use the injected details In-details and the DUMS image as the input of a shallow CNN and the MS image as the output to build the nonlinear model NLCNN; train the NLCNN fully to obtain the optimal nonlinear model, freeze its parameters, and use the frozen model to produce the pan-sharpened image.
  2. 根据权利要求1所述的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法,其特征在于,所述步骤S1中的预处理包括:大气校正和空间配准。A two-stage lightweight network panchromatic sharpening method combined with guided filtering and NSCT according to claim 1, wherein the preprocessing in step S1 includes: atmospheric correction and spatial registration.
  3. 根据权利要求1所述的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法,其特征在于,所述步骤S2的具体步骤包括:A two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT according to claim 1, wherein the specific steps of the step S2 include:
    步骤S21、根据Wald准则及全色图像和多光谱图像之间的空间分辨率之比对MS图像和PAN图像使用双三次插值方法进行下采样,并获得降分辨率的LPAN图像以及DMS图像;Step S21, according to the Wald criterion and the spatial resolution ratio between the panchromatic image and the multispectral image, the MS image and the PAN image are down-sampled using the bicubic interpolation method, and the reduced-resolution LPAN image and the DMS image are obtained;
    步骤S22、根据Wald准则对DMS图像使用双三次插值方法进行上采样,并获得DUMS图像;Step S22. Upsampling the DMS image using a bicubic interpolation method according to the Wald criterion, and obtaining a DUMS image;
    步骤S23、根据Wald准则对MS图像使用双三次插值方法进行上采样,并获得UMS图像;Step S23, using bicubic interpolation method to upsample the MS image according to the Wald criterion, and obtain the UMS image;
    步骤S24、由DUMS图像、LPAN图像以及MS图像构建仿真训练集和仿真测试集,由UMS图像和PAN图像构建真实测试集。Step S24, constructing a simulation training set and a simulation test set from DUMS images, LPAN images and MS images, and constructing a real test set from UMS images and PAN images.
  4. 根据权利要求1所述的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法,其特征在于,所述步骤S3中的AIHS变换获取I分量图像的表达式为:A two-stage lightweight network panchromatic sharpening method combined with guided filtering and NSCT according to claim 1, wherein the expression of the AIHS transformation in the step S3 to obtain the I component image is:
    I = Σ_{i=1}^{N} a_i·MS_i
    其中i为第i个通道，a_i为自适应系数，N为通道的总数。Where i denotes the i-th channel, a_i is the adaptive coefficient, and N is the total number of channels.
  5. 根据权利要求1所述的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法，其特征在于，所述步骤S4的具体步骤为：使用引导滤波器对MLPAN图像进行滤波，引导滤波器的输入图像为MLPAN图像，引导图像为I分量图像，进行滤波后得到低频分量MLPAN_i = GF(MLPAN_(i-1), I)，其中GF为引导滤波器，MLPAN_(i-1)是第i-1次滤波的输出图像，当i=1时，即是MLPAN图像，则第i个低频分量MLPAN_Li = MLPAN_i，第i个高频分量MLPAN_Hi = MLPAN_L(i-1) - MLPAN_Li，在进行n次滤波后得到n个高频分量MLPAN_Hn以及n个低频分量MLPAN_Ln。The two-stage lightweight network pan-sharpening method combining guided filtering and NSCT according to claim 1, wherein step S4 is specifically: filter the MLPAN image with a guided filter whose input image is the MLPAN image and whose guide image is the I-component image, obtaining the low-frequency component MLPAN_i = GF(MLPAN_(i-1), I), where GF denotes the guided filter and MLPAN_(i-1) is the output of the (i-1)-th filtering pass (for i = 1 it is the MLPAN image itself); the i-th low-frequency component is MLPAN_Li = MLPAN_i and the i-th high-frequency component is MLPAN_Hi = MLPAN_L(i-1) - MLPAN_Li; after n filtering passes, n high-frequency components MLPAN_Hn and n low-frequency components MLPAN_Ln are obtained.
  6. 根据权利要求1所述的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法,其特征在于,所述步骤S5的NSCT包括非下采样金字塔滤波器组NSPFB和非下采样方向滤波器组NSDFB。A two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT according to claim 1, wherein the NSCT in step S5 includes a non-subsampling pyramid filter bank NSPFB and a non-subsampling Directional Filter Bank NSDFB.
  7. 根据权利要求6所述的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法,其特征在于,所述步骤S5的具体步骤包括:A two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT according to claim 6, wherein the specific steps of step S5 include:
    步骤S51、采用NSPFB对I分量图像进行分解，获得低频子带图像I_Li和高频子带图像I_Hi；Step S51: decompose the I-component image with the NSPFB to obtain the low-frequency sub-band image I_Li and the high-frequency sub-band image I_Hi;
    步骤S52、采用NSPFB对低频子带图像进行分解,并获得下一层的低频子带图像和高频子带图像;Step S52, using NSPFB to decompose the low-frequency sub-band image, and obtain the low-frequency sub-band image and the high-frequency sub-band image of the next layer;
    步骤S53、采用NSDFB分别对每一层的高频子带图像进行滤波,获得每一层的高频方向子带图像。Step S53, using NSDFB to filter the high-frequency sub-band images of each layer respectively, to obtain the high-frequency direction sub-band images of each layer.
  8. 根据权利要求1所述的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法,其特征在于,所述步骤S6的具体步骤包括:A two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT according to claim 1, wherein the specific steps of step S6 include:
    步骤S61、以DUMS图像、MLPAN图像、高频分量MLPAN_Hn、低频分量MLPAN_Ln、高频方向子带图像I_Hn以及低频子带图像I_Ln作为ResCNN网络的输入；Step S61: take the DUMS image, the MLPAN image, the high-frequency components MLPAN_Hn, the low-frequency components MLPAN_Ln, the high-frequency directional sub-band images I_Hn, and the low-frequency sub-band images I_Ln as the input of the ResCNN network;
    步骤S62、将DUMS图像和MS图像之间相差的细节作为标签;Step S62, using the details of the difference between the DUMS image and the MS image as a label;
    步骤S63、对ResCNN网络进行训练,并使损失函数最小后,冻结训练参数,得到最优模型;Step S63, train the ResCNN network, and after minimizing the loss function, freeze the training parameters to obtain the optimal model;
    步骤S64、根据最优模型获得注入细节In-details。Step S64, obtaining injection details In-details according to the optimal model.
  9. 根据权利要求1所述的一种结合引导滤波和NSCT的两阶轻量型网络全色锐化方法,其特征在于,所述步骤S7的具体步骤包括:A two-stage lightweight network panchromatic sharpening method combining guided filtering and NSCT according to claim 1, wherein the specific steps of the step S7 include:
    步骤S71、将注入细节In-details、DUMS图像作为非线性模型NLCNN网络的输入;Step S71, using the In-details and DUMS images injected as the input of the nonlinear model NLCNN network;
    步骤S72、将MS图像作为标签;Step S72, using the MS image as a label;
    步骤S73、对上述网络进行训练,并使损失函数最小后,冻结训练参数,得到最优非线性模型;Step S73, train the above network, and after minimizing the loss function, freeze the training parameters to obtain the optimal nonlinear model;
    步骤S74、使用最优非线性模型获得全色锐化图像。Step S74, using the optimal nonlinear model to obtain a pan-sharpened image.
PCT/CN2021/122464 2021-07-19 2021-09-30 Two-order lightweight network panchromatic sharpening method combining guided filtering and nsct WO2023000505A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110814955.0 2021-07-19
CN202110814955.0A CN113643197B (en) 2021-07-19 2021-07-19 Two-order lightweight network full-color sharpening method combining guided filtering and NSCT

Publications (1)

Publication Number Publication Date
WO2023000505A1

Family

ID=78417698

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/122464 WO2023000505A1 (en) 2021-07-19 2021-09-30 Two-order lightweight network panchromatic sharpening method combining guided filtering and nsct

Country Status (2)

Country Link
CN (1) CN113643197B (en)
WO (1) WO2023000505A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861083A (en) * 2023-03-03 2023-03-28 吉林大学 Hyperspectral and multispectral remote sensing fusion method for multi-scale and global features

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN115564644B (en) * 2022-01-10 2023-07-25 荣耀终端有限公司 Image data processing method, related device and computer storage medium
CN114663301B (en) * 2022-03-05 2024-03-08 西北工业大学 Convolutional neural network panchromatic sharpening method based on wavelet layer
CN117132468B (en) * 2023-07-11 2024-05-24 汕头大学 Curvelet coefficient prediction-based super-resolution reconstruction method for precise measurement image

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110660038A (en) * 2019-09-09 2020-01-07 山东工商学院 Multispectral image and panchromatic image fusion method based on generation countermeasure network
US20200265597A1 (en) * 2018-03-14 2020-08-20 Dalian University Of Technology Method for estimating high-quality depth maps based on depth prediction and enhancement subnetworks

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN104318527A (en) * 2014-10-21 2015-01-28 浙江工业大学 Method for de-noising medical ultrasonic image based on wavelet transformation and guide filter
CN107610049B (en) * 2017-08-21 2021-01-05 华侨大学 Image super-resolution method based on sparse regularization technology and weighting-guided filtering
CN110428387B (en) * 2018-11-16 2022-03-04 西安电子科技大学 Hyperspectral and full-color image fusion method based on deep learning and matrix decomposition
CN110930339A (en) * 2019-12-05 2020-03-27 福州大学 Aviation and remote sensing image defogging method based on NSCT domain
CN113129247B (en) * 2021-04-21 2023-04-07 重庆邮电大学 Remote sensing image fusion method and medium based on self-adaptive multi-scale residual convolution

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200265597A1 (en) * 2018-03-14 2020-08-20 Dalian University Of Technology Method for estimating high-quality depth maps based on depth prediction and enhancement subnetworks
CN110660038A (en) * 2019-09-09 2020-01-07 山东工商学院 Multispectral image and panchromatic image fusion method based on generation countermeasure network

Non-Patent Citations (3)

Title
HU JIANWEN; HU PEI; KANG XUDONG; ZHANG HUI; FAN SHAOSHENG: "Pan-Sharpening via Multiscale Dynamic Convolutional Neural Network", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, IEEE, USA, vol. 59, no. 3, 16 July 2020 (2020-07-16), USA, pages 2231 - 2244, XP011838586, ISSN: 0196-2892, DOI: 10.1109/TGRS.2020.3007884 *
IMANI, M.: "Band Dependent Spatial Details Injection Based on Collaborative Representation for Pansharpening", IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, vol. 11, no. 12, 31 December 2018 (2018-12-31), XP011695716, DOI: 10.1109/JSTARS.2018.2851791 *
YANG YONG, LU HANGYUAN; HUANG SHUYING; TU WEI; LI LUYI: "Remote Sensing Image Fusion Method Based on Adaptive Injection Model", JOURNAL OF BEIJING UNIVERSITY OF AERONAUTICS AND ASTRONAUTICS, GAI KAN BIAN WEI HUI, BEIJING, CN, vol. 45, no. 12, 31 December 2019 (2019-12-31), CN , pages 2351 - 2363, XP093026726, ISSN: 1001-5965, DOI: 10.13700/j.bh.1001-5965.2019.0372 *

Cited By (1)

Publication number Priority date Publication date Assignee Title
CN115861083A (en) * 2023-03-03 2023-03-28 吉林大学 Hyperspectral and multispectral remote sensing fusion method for multi-scale and global features

Also Published As

Publication number Publication date
CN113643197B (en) 2023-06-20
CN113643197A (en) 2021-11-12

Similar Documents

Publication Publication Date Title
WO2023000505A1 (en) Two-order lightweight network panchromatic sharpening method combining guided filtering and nsct
Zhang et al. Pan-sharpening using an efficient bidirectional pyramid network
Zhong et al. Remote sensing image fusion with convolutional neural network
CN109741256B (en) Image super-resolution reconstruction method based on sparse representation and deep learning
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
Luo et al. Pansharpening via unsupervised convolutional neural networks
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
Li et al. DDLPS: Detail-based deep Laplacian pansharpening for hyperspectral imagery
CN107240066A (en) Image super-resolution rebuilding algorithm based on shallow-layer and deep layer convolutional neural networks
Yang et al. SAR-to-optical image translation based on improved CGAN
CN113793289B (en) Multispectral image and full-color image fuzzy fusion method based on CNN and NSCT
Kwan et al. Pansharpening of Mastcam images
CN112184604B (en) Color image enhancement method based on image fusion
Xiao et al. Image Fusion
Pan et al. FDPPGAN: remote sensing image fusion based on deep perceptual patchGAN
Liu et al. Research on super-resolution reconstruction of remote sensing images: A comprehensive review
Guo et al. MDFN: Mask deep fusion network for visible and infrared image fusion without reference ground-truth
Villar-Corrales et al. Deep learning architectural designs for super-resolution of noisy images
Gong et al. Learning deep resonant prior for hyperspectral image super-resolution
Jian et al. Pansharpening using a guided image filter based on dual-scale detail extraction
Sulaiman et al. A robust pan-sharpening scheme for improving resolution of satellite images in the domain of the nonsubsampled shearlet transform
Zhang et al. Enhanced visual perception for underwater images based on multistage generative adversarial network
CN108537765A (en) A kind of spaceborne PAN and multi-spectral image interfusion method
CN110400270B (en) License plate defogging method utilizing image decomposition and multiple correction fusion

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21950743

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE