CN111563866B - Multisource remote sensing image fusion method - Google Patents

Multisource remote sensing image fusion method Download PDF

Info

Publication number
CN111563866B
CN111563866B CN202010378705.2A CN202010378705A CN111563866B CN 111563866 B CN111563866 B CN 111563866B CN 202010378705 A CN202010378705 A CN 202010378705A CN 111563866 B CN111563866 B CN 111563866B
Authority
CN
China
Prior art keywords
image
component
multispectral
fusion
panchromatic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202010378705.2A
Other languages
Chinese (zh)
Other versions
CN111563866A (en
Inventor
李晓玲
聂祥飞
黄海波
张月
冯丽源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Three Gorges University
Original Assignee
Chongqing Three Gorges University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Three Gorges University filed Critical Chongqing Three Gorges University
Priority to CN202010378705.2A priority Critical patent/CN111563866B/en
Publication of CN111563866A publication Critical patent/CN111563866A/en
Application granted granted Critical
Publication of CN111563866B publication Critical patent/CN111563866B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a multi-source remote sensing image fusion method, which comprises the following steps: firstly, utilizing IHS conversion to obtain a brightness I component, a chromaticity H component and a saturation S component of the multi-spectrum image after up-sampling; filtering the component I by adopting guide filtering, and constructing self-adaptive fractional order differentiation for enhancing edge detail information in the full-color image; respectively carrying out wavelet transformation on the filtered multispectral image I component and the enhanced panchromatic image to obtain a high-frequency component and a low-frequency component of the multispectral image I component and the enhanced panchromatic image, wherein the high-frequency component adopts an absolute value maximization principle, and the low-frequency component adopts a weighted average principle; and taking the result obtained by wavelet inverse transformation as a new I component, and finally obtaining a fusion image by utilizing IHS inverse transformation. The method combines the guided filtering and the fractional differential to fuse the full-color image and the multispectral image, effectively inhibits the spectrum distortion phenomenon of the fusion result, and reduces the loss of space detail information.

Description

一种多源遥感图像融合方法A Multi-source Remote Sensing Image Fusion Method

技术领域technical field

本发明涉及图像处理技术领域,具体地说是一种多源遥感图像融合方法。The invention relates to the technical field of image processing, in particular to a multi-source remote sensing image fusion method.

背景技术Background technique

多源遥感图像融合是指将来源于不同传感器的同一场景的两幅或多幅遥感图像进行信息互补叠加,得到信息更加精确和完善的综合图像的图像处理技术。图像融合不仅是遥感探测数据处理的重要组成部分,而且在环境检测、城市规划、军事侦察等领域也有着广泛的应用空间。近年来,随着信号处理技术的不断发展,研究人员对图像融合方法展开了大量研究。Multi-source remote sensing image fusion refers to the image processing technology that superimposes two or more remote sensing images of the same scene from different sensors to obtain a comprehensive image with more accurate and complete information. Image fusion is not only an important part of remote sensing data processing, but also has a wide range of applications in environmental detection, urban planning, military reconnaissance and other fields. In recent years, with the continuous development of signal processing technology, researchers have conducted a lot of research on image fusion methods.

目前,多源遥感图像融合主要分为三个层次:像素层融合、特征层融合和决策层融合。相比于特征层融合和决策层融合,像素层融合在精准性和时效性方面具有更佳的性能。针对像素层多源遥感图像融合方法主要包括三类:基于成分替代的图像融合、基于多分辨率分析的图像融合及基于模式的图像融合。基于成分替代的图像融合方法具有复杂度低、易于实现的特性,且融合结果的空间细节信息保存良好,但处理过程中涉及到空间变换操作,因而常常出现光谱畸变;基于多分辨率分析的图像融合方法不易出现光谱失真现象,但融合过程中常出现振铃现象,进而使得融合结果出现空间特征丢失;基于模型的图像融合方法不易出现光谱失真和空间特征的丢失,然而融合过程中所涉及的方法复杂度较高。现阶段,虽然图像融合方法不断涌现,但依旧面临很多问题和困难,主要体现在如下方面:1)融合图像的空间信息易丢失;2)融合图像的光谱易出现失真。At present, multi-source remote sensing image fusion is mainly divided into three levels: pixel level fusion, feature level fusion and decision level fusion. Compared with feature level fusion and decision level fusion, pixel level fusion has better performance in terms of accuracy and timeliness. Multi-source remote sensing image fusion methods for pixel level mainly include three categories: image fusion based on component replacement, image fusion based on multi-resolution analysis and image fusion based on pattern. The image fusion method based on component substitution has the characteristics of low complexity and easy implementation, and the spatial details of the fusion result are well preserved, but the processing involves spatial transformation operations, so spectral distortion often occurs; images based on multi-resolution analysis The fusion method is not prone to spectral distortion, but the ringing phenomenon often occurs during the fusion process, which leads to the loss of spatial features in the fusion result; the model-based image fusion method is not prone to spectral distortion and loss of spatial features, but the methods involved in the fusion process High complexity. At this stage, although image fusion methods continue to emerge, they still face many problems and difficulties, which are mainly reflected in the following aspects: 1) The spatial information of the fused image is easily lost; 2) The spectrum of the fused image is prone to distortion.

因此,解决多源遥感图像融合过程中存在的空间信息丢失和光谱失真问题,实已成为本领域技术人员的重要课题之一。Therefore, solving the problems of spatial information loss and spectral distortion in the process of multi-source remote sensing image fusion has become one of the important tasks for those skilled in the art.

发明内容Contents of the invention

本发明要解决的技术问题是:提供一种多源遥感图像融合方法,以有效抑制融合结果的光谱失真现象,并减少空间细节信息的丢失。The technical problem to be solved by the present invention is to provide a multi-source remote sensing image fusion method to effectively suppress the spectral distortion phenomenon of fusion results and reduce the loss of spatial detail information.

本发明为实现上述目的所采用的技术方案包括以下步骤:The technical solution adopted by the present invention for realizing the above object comprises the following steps:

步骤一:获取同一地物目标的多光谱图像和全色图像,将多光谱图像运用双三次插值法进行上采样,使多光谱图像的尺寸与全色图像保持一致;Step 1: Obtain the multispectral image and the panchromatic image of the same ground object, and use the bicubic interpolation method to upsample the multispectral image so that the size of the multispectral image is consistent with that of the panchromatic image;

步骤二:采用IHS变换对多光谱图像进行色彩空间变换,提取多光谱图像的亮度I分量以备下一步处理,保留多光谱图像的色度H分量与饱和度S分量用作后续的IHS反变换;Step 2: Use IHS transformation to transform the color space of the multispectral image, extract the brightness I component of the multispectral image for further processing, and retain the chroma H component and saturation S component of the multispectral image for subsequent IHS inverse transformation ;

步骤三:构造自适应分数阶微分用于增强全色图像的边缘细节和保留地物轮廓信息,同时,采用引导滤波对多光谱图像的亮度I分量进行滤波处理;Step 3: Construct an adaptive fractional differential to enhance the edge details of the panchromatic image and preserve the contour information of the object, and at the same time, use guided filtering to filter the brightness I component of the multispectral image;

步骤四:将分数阶微分处理后的全色图像和引导滤波处理后的多光谱图像亮度I分量分别进行小波变换,得到变换后的高、低频分量,其中,高频分量采用绝对值取大原则,低频分量采用加权平均原则;Step 4: Perform wavelet transformation on the panchromatic image after fractional differential processing and the brightness I component of the multispectral image after guided filtering processing to obtain the transformed high and low frequency components, where the high frequency component adopts the principle of taking the largest absolute value , the low-frequency component adopts the weighted average principle;

步骤五:通过小波重构得到小波反变换后的结果图像;Step 5: Obtain the result image after wavelet inverse transformation through wavelet reconstruction;

步骤六:将小波反变换得到的结果作为Inew分量,并与H分量、S分量进行IHS反变换,得到融合图像。Step 6: Use the result obtained by inverse wavelet transform as the I new component, and perform IHS inverse transform with the H component and S component to obtain a fused image.

本发明具有以下优点及有益效果:The present invention has the following advantages and beneficial effects:

1.相比于现有技术中对上采样后多光谱图像的亮度分量直接进行处理,易产生块效应的问题;本发明引入了具有结构转移特性和保边平滑特性的引导滤波,实现了对图像融合过程中块效应的抑制。同时,也增强了多光谱图像I分量的空间纹理信息及局部细节信息,从而辅助提高融合图像的视觉效果。1. Compared with the direct processing of the luminance component of the upsampled multispectral image in the prior art, the problem of block effect is easy to occur; the present invention introduces guided filtering with structure transfer characteristics and edge-preserving smoothing characteristics, and realizes the Suppression of blocking artifacts during image fusion. At the same time, the spatial texture information and local detail information of the I component of the multispectral image are also enhanced, thereby helping to improve the visual effect of the fused image.

2.相比于现有技术中对亮度分量与全色图像直接进行直方图匹配,导致灰度级减少、部分细节丢失的问题;本发明将分数阶微分引入到图像融合中,特别是结合图像统计特征对分数阶微分的阶次进行改进,避免了人为地设定固定阶次,实现了具有自适应性的分数阶微分。不仅可以有效保留图像作为基础层的平坦部分,还使得图像的边缘细节部分得到增强。2. Compared with the direct histogram matching of the luminance component and the full-color image in the prior art, the gray level is reduced and some details are lost; the present invention introduces fractional differentiation into image fusion, especially in combination with image Statistical features improve the order of fractional differentiation, avoid artificially setting a fixed order, and realize adaptive fractional differentiation. It can not only effectively preserve the flat part of the image as the base layer, but also enhance the edge details of the image.

附图说明Description of drawings

图1为本发明方法的图像融合流程图。Fig. 1 is the image fusion flowchart of the method of the present invention.

图2为本发明方法在实验中使用的测试图像,以及不同方法在测试图像上的融合效果对比图。Fig. 2 is a test image used in the experiment of the method of the present invention, and a comparison diagram of fusion effects of different methods on the test image.

其中,图2(a)为全色图像,图2(b)为多光谱图像,图2(c)为IHS变换法得到的融合图像,图2(d)为Brovey法得到的融合图像,图2(e)为PCA法得到的融合图像,图2(f)为DWT法得到的融合图像,图2(g)为ATWT-M3法得到的融合图像,图2(h)为ATWT法得到的融合图像,图2(i)为AWLP法得到的融合图像,图2(j)为GS法得到的融合图像,图2(k)为HPF法得到的融合图像,图2(l)为MTF-GLP法得到的融合图像,图2(m)为本发明方法得到的融合图像。Among them, Fig. 2(a) is a panchromatic image, Fig. 2(b) is a multispectral image, Fig. 2(c) is a fused image obtained by the IHS transform method, Fig. 2(d) is a fused image obtained by the Brovey method, Fig. 2(e) is the fused image obtained by the PCA method, Fig. 2(f) is the fused image obtained by the DWT method, Fig. 2(g) is the fused image obtained by the ATWT-M3 method, and Fig. 2(h) is the fused image obtained by the ATWT method Fusion image, Fig. 2(i) is the fusion image obtained by AWLP method, Fig. 2(j) is the fusion image obtained by GS method, Fig. 2(k) is the fusion image obtained by HPF method, Fig. 2(l) is the fusion image obtained by MTF- The fused image obtained by the GLP method, Fig. 2(m) is the fused image obtained by the method of the present invention.

具体实施方式Detailed ways

下面结合附图及实施例对本发明作进一步的详细说明。The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

如图1所示,一种多源遥感图像融合方法,包括以下步骤:As shown in Figure 1, a multi-source remote sensing image fusion method includes the following steps:

步骤一:获取同一地物目标的多光谱图像和全色图像,将多光谱图像运用双三次插值法进行上采样,使多光谱图像的尺寸与全色图像保持一致。Step 1: Obtain the multispectral image and the panchromatic image of the same ground object, and use the bicubic interpolation method to upsample the multispectral image so that the size of the multispectral image is consistent with that of the panchromatic image.

步骤二:采用IHS变换对多光谱图像进行色彩空间变换,提取多光谱图像的亮度I分量以备下一步处理,保留多光谱图像的色度H分量与饱和度S分量用作后续的IHS反变换,I分量、H分量及S分量的计算公式分别为:Step 2: Use IHS transformation to transform the color space of the multispectral image, extract the brightness I component of the multispectral image for further processing, and retain the chroma H component and saturation S component of the multispectral image for subsequent IHS inverse transformation , the calculation formulas of I component, H component and S component are respectively:

Figure BDA0002481223390000031
Figure BDA0002481223390000031

H=tan-1[s1/s2] (2)H=tan -1 [s 1 /s 2 ] (2)

Figure BDA0002481223390000032
Figure BDA0002481223390000032

其中,R、G、B分别是多光谱图像的红、绿、蓝三个波段。Among them, R, G, and B are the red, green, and blue bands of the multispectral image, respectively.

步骤三:构造自适应分数阶微分用于增强全色图像的边缘细节和保留地物轮廓信息,具体方式为:Step 3: Construct an adaptive fractional differential to enhance the edge details of the panchromatic image and preserve the outline information of the object. The specific method is:

假设全色图像f(i,j)的大小是M×N,那么空间频率计算公式为:Assuming that the size of the panchromatic image f(i,j) is M×N, then the spatial frequency calculation formula is:

Figure BDA0002481223390000041
Figure BDA0002481223390000041

Figure BDA0002481223390000042
Figure BDA0002481223390000042

Figure BDA0002481223390000043
Figure BDA0002481223390000043

其中,RF与CF分别表示全色图像f(i,j)的行频率和列频率。图像的空间频率数值越大,表明图像的空间信息越丰富,图像的层次感越强。此外,全色图像f(i,j)平均梯度的计算公式为:Among them, RF and CF represent the row frequency and column frequency of the panchromatic image f(i,j) respectively. The larger the value of the spatial frequency of the image, the richer the spatial information of the image and the stronger the layering of the image. In addition, the calculation formula of the average gradient of the panchromatic image f(i,j) is:

Figure BDA0002481223390000044
Figure BDA0002481223390000044

平均梯度的数值越大,则图像的边缘、纹理等细节特征越突出,清晰度也越高。接着,采用反余切非线性归一化函数对全色图像的空间频率和平均梯度进行归一化,即:The larger the value of the average gradient, the more prominent the details such as the edge and texture of the image, and the higher the definition. Next, the spatial frequency and average gradient of the panchromatic image are normalized using the arccotangent nonlinear normalization function, namely:

Figure BDA0002481223390000045
Figure BDA0002481223390000045

Figure BDA0002481223390000046
Figure BDA0002481223390000046

考虑到图像的空间频率和平均梯度的数值大小对微分阶次的影响同等重要,因此对二者进行如下平均加权处理:Considering that the spatial frequency of the image and the numerical value of the average gradient are equally important to the differential order, the following average weighting process is performed on the two:

Figure BDA0002481223390000047
Figure BDA0002481223390000047

由于Tanh函数在实数范围内为单调递增函数,且呈现非线性增长趋势,此特性符合微分阶次随图像统计信息的变化规律,因而采用Tanh函数对阶次函数进行如下构造。Since the Tanh function is a monotonically increasing function in the range of real numbers and presents a nonlinear growth trend, this characteristic conforms to the law of the variation of the differential order with image statistics, so the Tanh function is used to construct the order function as follows.

Figure BDA0002481223390000051
Figure BDA0002481223390000051

当分数阶微分阶次v∈[0.5,0.7]时,能够更加突出图像的纹理细节,并充分保留图像轮廓信息。因此,本发明对f(Y)展开了修正处理,其中,β和α分别取0.5和0.7,进而得到自适应分数阶微分阶次的计算函数v。When the fractional differential order v∈[0.5,0.7], the texture details of the image can be highlighted and the image contour information can be fully preserved. Therefore, the present invention carries out a correction process on f(Y), wherein β and α are respectively set to 0.5 and 0.7, and then the calculation function v of the adaptive fractional differential order is obtained.

Figure BDA0002481223390000052
Figure BDA0002481223390000052

同时,采用引导滤波对多光谱图像的亮度I分量进行滤波处理,具体方式为:At the same time, the guided filtering is used to filter the brightness I component of the multispectral image, and the specific method is as follows:

首先,设置引导滤波的半径r=7,正则化参数ε=10-6。计算引导滤波的线性系数a(k,l)和b(k,l)的数值分别为:First, set the radius of the guided filter r=7, and the regularization parameter ε=10 −6 . The values of the linear coefficients a (k,l) and b (k,l) for calculating the guided filtering are:

Figure BDA0002481223390000053
Figure BDA0002481223390000053

Figure BDA0002481223390000054
Figure BDA0002481223390000054

式中,|ω|表示以像素点(k,l)为中心、r为半径的矩形局部窗ω(k,l)内的像素个数,

Figure BDA0002481223390000055
和μ(k,l)分别是局部窗ω(k,l)所含像素的方差和均值,
Figure BDA0002481223390000056
代表局部窗ω(k,l)所含多光谱图像I分量的像素均值,ε是正则化参数。考虑到在引导滤波的处理过程中,一个像素点(i,j)可能同时被多个局部窗ω(k,l)滑过。因而,需对线性系数a(k,l)和b(k,l)进行均值处理,即:In the formula, |ω| represents the number of pixels in a rectangular local window ω (k,l) centered on pixel (k,l) and radius r,
Figure BDA0002481223390000055
and μ (k,l) are the variance and mean of the pixels contained in the local window ω (k,l) , respectively,
Figure BDA0002481223390000056
Represents the pixel mean of the multispectral image I component contained in the local window ω (k,l) , and ε is a regularization parameter. Considering that during the process of guided filtering, a pixel point (i, j) may be slid by multiple local windows ω (k, l) at the same time. Therefore, the linear coefficients a (k,l) and b (k,l) need to be averaged, namely:

Figure BDA0002481223390000057
Figure BDA0002481223390000057

Figure BDA0002481223390000061
Figure BDA0002481223390000061

Figure BDA0002481223390000062
Figure BDA0002481223390000063
代入引导滤波的线性定义模型,从而得到滤波后的输出图像。Will
Figure BDA0002481223390000062
and
Figure BDA0002481223390000063
Substitute into the linear definition model of guided filtering to get the filtered output image.

Figure BDA0002481223390000064
Figure BDA0002481223390000064

将引导滤波后的结果作为图像的基础层,通过多光谱图像I分量与基础图像作减法得到图像的细节层,接着对细节层的灰度变化范围进行线性变换,最后与基础图像相加得到纹理结构增强图像。The result after guided filtering is used as the base layer of the image, and the detail layer of the image is obtained by subtracting the I component of the multispectral image from the base image, and then the gray scale variation range of the detail layer is linearly transformed, and finally added to the base image to obtain the texture Structurally enhanced images.

步骤四:将分数阶微分处理后的全色图像和引导滤波处理后的多光谱图像亮度I分量分别进行小波变换,得到变换后的高、低频分量。其中,高频分量采用绝对值取大原则,低频分量采用加权平均原则。Step 4: Perform wavelet transformation on the panchromatic image after fractional differential processing and the luminance I component of the multispectral image after guided filtering processing to obtain transformed high and low frequency components. Among them, the high-frequency component adopts the principle of taking the largest absolute value, and the low-frequency component adopts the weighted average principle.

步骤五:通过小波重构得到小波反变换后的结果图像。Step 5: Obtain the result image after wavelet inverse transformation through wavelet reconstruction.

步骤六:将小波反变换得到的结果作为Inew分量,并与H分量、S分量进行IHS反变换得到融合图像,计算公式为:Step 6: Use the result obtained by wavelet inverse transform as the I new component, and perform IHS inverse transform with the H component and S component to obtain the fused image. The calculation formula is:

Figure BDA0002481223390000065
Figure BDA0002481223390000065

其中,Rnew、Gnew、Bnew分别是融合图像的红、绿、蓝三个波段。Among them, R new , G new , and B new are the red, green, and blue bands of the fused image respectively.

本发明选择已配准好的多光谱图像和全色图像作为测试图像,并与IHS、Brovey、PCA、DWT、ATWT-M3、ATWT、AWLP、GS、HPF及MTF-GLP方法进行对比研究。The present invention selects registered multispectral images and panchromatic images as test images, and conducts comparative studies with IHS, Brovey, PCA, DWT, ATWT-M3, ATWT, AWLP, GS, HPF and MTF-GLP methods.

实验结果如下:The experimental results are as follows:

实验1,不同方法在测试图像上的融合结果如图2所示。经对比分析后发现,本发明方法得到的融合结果在保留图像光谱特征的同时,增强了图像中地物的细节纹理,使得图像的清晰度更高,视觉效果更佳。Experiment 1, the fusion results of different methods on the test image are shown in Figure 2. After comparative analysis, it is found that the fusion result obtained by the method of the present invention not only preserves the spectral features of the image, but also enhances the detailed texture of the ground objects in the image, so that the image has higher definition and better visual effect.

实验2,为提高融合结果质量评价的准确性,本发明方法采用了几种常见的评价指标,包括:平均梯度(AG)、平均值(ME)、标准差(SD)、信息熵(IE)、互信息(MI)及空间频率(SF)。上述评价指标的数值越大,则图像的空间信息越丰富,层次感越强,如表1所示。根据表1可以看出,相比于其他方法,经本发明方法得到的融合结果,在各质量评价指标上均取得了不同程度的提升,具有一定的综合优势。该结果表明本发明方法融合图像的细节更丰富,目视效果更好。Experiment 2, in order to improve the accuracy of the quality evaluation of fusion results, the method of the present invention adopts several common evaluation indicators, including: average gradient (AG), mean value (ME), standard deviation (SD), information entropy (IE) , Mutual Information (MI) and Spatial Frequency (SF). The larger the value of the above evaluation index, the richer the spatial information of the image and the stronger the sense of hierarchy, as shown in Table 1. According to Table 1, it can be seen that compared with other methods, the fusion results obtained by the method of the present invention have achieved different degrees of improvement in various quality evaluation indicators, and have certain comprehensive advantages. The result shows that the method of the present invention has richer details of the fused image and better visual effect.

表1图像融合结果质量评价指标统计表Table 1 Statistical table of quality evaluation indicators of image fusion results

Figure BDA0002481223390000071
Figure BDA0002481223390000071

综上,本发明公开的一种多源遥感图像融合方法,能够有效减少融合结果的光谱失真现象和空间细节信息丢失的问题,具有较高的有效性和可行性。To sum up, the multi-source remote sensing image fusion method disclosed in the present invention can effectively reduce the spectral distortion phenomenon and the loss of spatial detail information of fusion results, and has high effectiveness and feasibility.

上面对本专利的较佳实施方式作了详细说明,但是本专利保护范围并不限于上述实施方式,在本领域普通技术人员所具备的知识范围内,还可以在不脱离本专利宗旨的前提下做出其他变化,这样应当列入本发明的保护范围。The preferred implementation mode of this patent has been described in detail above, but the scope of protection of this patent is not limited to the above-mentioned implementation mode. Within the knowledge of those of ordinary skill in the art, it can also be done without departing from the purpose of this patent. Other changes should be included in the protection scope of the present invention.

Claims (2)

1.一种多源遥感图像融合方法,其特征在于,包括以下步骤:1. A multi-source remote sensing image fusion method is characterized in that, comprising the following steps: 步骤一:获取同一地物目标的多光谱图像和全色图像,将多光谱图像运用双三次插值法进行上采样,使多光谱图像的尺寸与全色图像保持一致;Step 1: Obtain the multispectral image and the panchromatic image of the same ground object, and use the bicubic interpolation method to upsample the multispectral image so that the size of the multispectral image is consistent with that of the panchromatic image; 步骤二:采用IHS变换对多光谱图像进行色彩空间变换,提取多光谱图像的亮度I分量以备下一步处理,保留多光谱图像的色度H分量与饱和度S分量用作后续的IHS反变换;Step 2: Use IHS transformation to transform the color space of the multispectral image, extract the brightness I component of the multispectral image for further processing, and retain the chroma H component and saturation S component of the multispectral image for subsequent IHS inverse transformation ; 步骤三:构造自适应分数阶微分用于增强全色图像的边缘细节和保留地物轮廓信息,同时,采用引导滤波对多光谱图像的亮度I分量进行滤波处理;Step 3: Construct an adaptive fractional differential to enhance the edge details of the panchromatic image and preserve the contour information of the object, and at the same time, use guided filtering to filter the brightness I component of the multispectral image; 步骤四:将分数阶微分处理后的全色图像和引导滤波处理后的多光谱图像亮度I分量分别进行小波变换,得到变换后的高、低频分量,其中,高频分量采用绝对值取大原则,低频分量采用加权平均原则;Step 4: Perform wavelet transformation on the panchromatic image after fractional differential processing and the brightness I component of the multispectral image after guided filtering processing to obtain the transformed high and low frequency components, where the high frequency component adopts the principle of taking the largest absolute value , the low-frequency component adopts the weighted average principle; 步骤五:通过小波重构得到小波反变换后的结果图像;Step 5: Obtain the result image after wavelet inverse transformation through wavelet reconstruction; 步骤六:将小波反变换得到的结果作为Inew分量,并与H分量、S分量进行IHS反变换,得到融合图像;Step 6: take the result obtained by the inverse wavelet transform as the I new component, and perform IHS inverse transform with the H component and the S component to obtain the fused image; 所述步骤二中得到的亮度I分量、色度H分量、饱和度S分量分别为:The brightness I component, chroma H component, and saturation S component obtained in the step 2 are respectively:
Figure FDA0004107376910000011
Figure FDA0004107376910000011
H=tan-1[s1/s2]H=tan -1 [s 1 /s 2 ]
Figure FDA0004107376910000012
Figure FDA0004107376910000012
其中,R、G、B分别是多光谱图像的红、绿、蓝三个波段;Among them, R, G, and B are the red, green, and blue bands of the multispectral image, respectively; 所述步骤三中采用的自适应分数阶微分方法,具体方式为:The adaptive fractional order differential method adopted in the step 3, the specific method is: 首先,计算全色图像f(i,j)的空间频率和平均梯度,其中,空间频率使用如下公式得到:First, calculate the spatial frequency and average gradient of the panchromatic image f(i,j), where the spatial frequency is obtained using the following formula:
Figure FDA0004107376910000013
Figure FDA0004107376910000013
Figure FDA0004107376910000014
Figure FDA0004107376910000014
Figure FDA0004107376910000015
Figure FDA0004107376910000015
式中,全色图像f(i,j)的大小是M×N,RF与CF分别表示全色图像f(i,j)的行频率和列频率;另外,全色图像f(i,j)平均梯度的计算公式为:In the formula, the size of the panchromatic image f(i,j) is M×N, and RF and CF represent the row frequency and column frequency of the panchromatic image f(i,j) respectively; in addition, the panchromatic image f(i,j ) The formula for calculating the average gradient is:
Figure FDA0004107376910000021
Figure FDA0004107376910000021
接着,采用反余切非线性归一化函数对全色图像的空间频率和平均梯度进行归一化,即:Next, the spatial frequency and average gradient of the panchromatic image are normalized using the arccotangent nonlinear normalization function, namely:
Figure FDA0004107376910000022
Figure FDA0004107376910000022
继续对二者进行如下平均加权处理:Continue to weight the two equally as follows:
Figure FDA0004107376910000023
Figure FDA0004107376910000023
最后,采用Tanh函数对微分阶次v进行如下构造:Finally, the Tanh function is used to construct the differential order v as follows:
Figure FDA0004107376910000024
Figure FDA0004107376910000024
Figure FDA0004107376910000025
Figure FDA0004107376910000025
其中,β和α分别取0.5和0.7;Among them, β and α take 0.5 and 0.7 respectively; 所述步骤三中采用引导滤波对多光谱图像I分量进行滤波处理的具体方式为:In the step 3, the specific method of filtering the I component of the multispectral image by using guided filtering is as follows: 首先,设置引导滤波的半径r=7,正则化参数ε=10-6,计算引导滤波的线性系数a(k,l)和b(k,l)的数值分别为:First, set the radius of guided filtering r=7, regularization parameter ε=10 -6 , and calculate the linear coefficients a (k,l) and b (k,l) of guided filtering as follows:
Figure FDA0004107376910000026
Figure FDA0004107376910000026
Figure FDA0004107376910000027
Figure FDA0004107376910000027
式中,|ω|表示以像素点(k,l)为中心、r为半径的矩形局部窗ω(k,l)内的像素个数,
Figure FDA0004107376910000028
和μ(k,l)分别是局部窗ω(k,l)所含像素的方差和均值,
Figure FDA0004107376910000029
代表局部窗ω(k,l)所含多光谱图像I分量的像素均值,ε是正则化参数;
In the formula, |ω| represents the number of pixels in a rectangular local window ω (k,l) centered on pixel (k,l) and radius r,
Figure FDA0004107376910000028
and μ (k,l) are the variance and mean of the pixels contained in the local window ω (k,l) , respectively,
Figure FDA0004107376910000029
Represents the pixel mean value of the multispectral image I component contained in the local window ω (k,l) , ε is a regularization parameter;
再对线性系数a(k,l)和b(k,l)进行均值处理,即:Then perform mean value processing on the linear coefficients a (k,l) and b (k,l) , namely:
Figure FDA00041073769100000210
Figure FDA00041073769100000210
Figure FDA00041073769100000211
Figure FDA00041073769100000211
Figure FDA00041073769100000212
Figure FDA00041073769100000213
代入引导滤波的线性定义模型,得到滤波后的输出图像:
Bundle
Figure FDA00041073769100000212
and
Figure FDA00041073769100000213
Substitute into the linear definition model of guided filtering to get the filtered output image:
Figure FDA0004107376910000031
Figure FDA0004107376910000031
将引导滤波后的结果作为图像的基础层,通过多光谱图像I分量与基础图像作减法得到图像的细节层,接着对细节层的灰度变化范围进行线性变换,最后与基础图像相加得到纹理结构增强图像。The result after guided filtering is used as the base layer of the image, and the detail layer of the image is obtained by subtracting the I component of the multispectral image from the base image, and then the gray scale variation range of the detail layer is linearly transformed, and finally added to the base image to obtain the texture Structurally enhanced images.
2.根据权利要求1所述的一种多源遥感图像融合方法,其特征在于,所述步骤六将得到的Inew分量与H分量、S分量进行IHS反变换,最终融合图像的计算公式为:2. a kind of multi-source remote sensing image fusion method according to claim 1, is characterized in that, described step 6 carries out IHS inverse transformation with the I new component that obtains and H component, S component, and the computing formula of final fusion image is :
Figure FDA0004107376910000032
Figure FDA0004107376910000032
其中,Rnew、Gnew、Bnew分别是融合图像的红、绿、蓝三个波段。Among them, R new , G new , and B new are the red, green, and blue bands of the fused image respectively.
CN202010378705.2A 2020-05-07 2020-05-07 Multisource remote sensing image fusion method Expired - Fee Related CN111563866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010378705.2A CN111563866B (en) 2020-05-07 2020-05-07 Multisource remote sensing image fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010378705.2A CN111563866B (en) 2020-05-07 2020-05-07 Multisource remote sensing image fusion method

Publications (2)

Publication Number Publication Date
CN111563866A CN111563866A (en) 2020-08-21
CN111563866B true CN111563866B (en) 2023-05-12

Family

ID=72070788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010378705.2A Expired - Fee Related CN111563866B (en) 2020-05-07 2020-05-07 Multisource remote sensing image fusion method

Country Status (1)

Country Link
CN (1) CN111563866B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330581B (en) * 2020-11-02 2022-07-12 燕山大学 Fusion method and system of SAR and multispectral image
CN113992838A (en) * 2021-08-09 2022-01-28 中科联芯(广州)科技有限公司 Imaging focusing method and control method of silicon-based multispectral signal
CN114897706B (en) * 2021-09-23 2025-05-09 武汉九天高分遥感技术有限公司 A green vegetation enhancement method based on panchromatic and multispectral image fusion
CN114897757B (en) * 2022-06-10 2024-06-25 大连民族大学 NSST and parameter self-adaptive PCNN-based remote sensing image fusion method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851091A (en) * 2015-04-28 2015-08-19 中山大学 Remote sensing image fusion method based on convolution enhancement and HCS transform
CN108921809A (en) * 2018-06-11 2018-11-30 上海海洋大学 Multispectral and panchromatic image fusion method under integral principle based on spatial frequency
CN109993717A (en) * 2018-11-14 2019-07-09 重庆邮电大学 A Remote Sensing Image Fusion Method Combining Guided Filtering and IHS Transform

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216557B (en) * 2007-12-27 2011-07-20 复旦大学 Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method
KR100944462B1 (en) * 2008-03-07 2010-03-03 한국항공우주연구원 Satellite image fusion method and system
CN101930604B (en) * 2010-09-08 2012-03-28 中国科学院自动化研究所 Panchromatic image and multispectral image fusion method based on low frequency correlation analysis
CN103679661B (en) * 2013-12-25 2016-09-28 北京师范大学 A kind of self adaptation remote sensing image fusion method based on significance analysis
CN104346790B (en) * 2014-10-30 2017-06-20 中山大学 A kind of remote sensing image fusion method of HCS combined with wavelet transformed
CN104851077B (en) * 2015-06-03 2017-10-13 四川大学 A kind of panchromatic sharpening method of adaptive remote sensing images
CN105741252B (en) * 2015-11-17 2018-11-16 西安电子科技大学 Video image grade reconstruction method based on rarefaction representation and dictionary learning
CN106023129A (en) * 2016-05-26 2016-10-12 西安工业大学 Infrared and visible light image fused automobile anti-blooming video image processing method
US10176966B1 (en) * 2017-04-13 2019-01-08 Fractilia, Llc Edge detection system
CN108874857A (en) * 2018-04-13 2018-11-23 重庆三峡学院 A kind of local records document is compiled and digitlization experiencing system
CN109166089A (en) * 2018-07-24 2019-01-08 重庆三峡学院 The method that a kind of pair of multispectral image and full-colour image are merged

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851091A (en) * 2015-04-28 2015-08-19 中山大学 Remote sensing image fusion method based on convolution enhancement and HCS transform
CN108921809A (en) * 2018-06-11 2018-11-30 上海海洋大学 Multispectral and panchromatic image fusion method under integral principle based on spatial frequency
CN109993717A (en) * 2018-11-14 2019-07-09 重庆邮电大学 A Remote Sensing Image Fusion Method Combining Guided Filtering and IHS Transform

Also Published As

Publication number Publication date
CN111563866A (en) 2020-08-21

Similar Documents

Publication Publication Date Title
CN111563866B (en) Multisource remote sensing image fusion method
Wang et al. Variational single nighttime image haze removal with a gray haze-line prior
CN108921809B (en) Multispectral and panchromatic image fusion method based on spatial frequency under integral principle
CN109191390A (en) A kind of algorithm for image enhancement based on the more algorithm fusions in different colours space
JP2011216083A (en) Method for processing digital image, method for zooming digital input image, and method for smoothing digital input image
Bi et al. Haze removal for a single remote sensing image using low-rank and sparse prior
CN102446351A (en) Multispectral and high-resolution full-color image fusion method study
Liu et al. Low-light video image enhancement based on multiscale retinex-like algorithm
CN106651817A (en) Non-sampling contourlet-based image enhancement method
CN115457359A (en) PET-MRI Image Fusion Method Based on Adaptive Adversarial Generative Network
CN105225213B (en) A kind of Color Image Fusion method based on S PCNN and laplacian pyramid
CN106485674A (en) A kind of low light image Enhancement Method based on integration technology
CN117252773A (en) Image enhancement method and system based on adaptive color correction and guided filtering
Liang et al. Learning to remove sandstorm for image enhancement
CN116596793A (en) Low-light image enhancement method based on U-shaped network and attention mechanism
CN111754433A (en) A dehazing method for aerial images
CN111311503A (en) A low-brightness image enhancement system at night
CN112435184B (en) Image recognition method for haze days based on Retinex and quaternion
CN108711160B (en) Target segmentation method based on HSI (high speed input/output) enhanced model
CN107169946A (en) Image interfusion method based on non-negative sparse matrix Yu hypersphere color transformation
CN116883799A (en) Hyperspectral image depth space spectrum fusion method guided by component replacement model
CN103198456B (en) Remote sensing image fusion method based on directionlet domain hidden Markov tree (HMT) model
CN111915500A (en) Foggy day image enhancement method based on improved Retinex algorithm
CN114331936A (en) Remote sensing image fusion method based on wavelet decomposition and improved IHS algorithm
CN115294001A (en) Night light remote sensing image fusion method for improving IHS and wavelet transformation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230512