CN109285133A - A spatiotemporal spectral integration fusion method for remote sensing image data with enhanced detail - Google Patents

A spatiotemporal spectral integration fusion method for remote sensing image data with enhanced detail

Info

Publication number
CN109285133A
Authority
CN
China
Prior art keywords
image
fusion
band
objective function
fused
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811142766.8A
Other languages
Chinese (zh)
Inventor
林连雷
王建峰
杨京礼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN201811142766.8A priority Critical patent/CN109285133A/en
Publication of CN109285133A publication Critical patent/CN109285133A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data, belonging to the technical field of remote sensing image fusion. The invention solves the problem that existing remote sensing image data fusion methods smooth away image detail information along with image noise. The method first applies the Laplacian operator to edge-enhance Y and Z, so that image noise can be smoothed while image detail is retained. It then uses the spatial degradation model between the fused image and Y′ and the spatiotemporal-spectral relationship model between the fused image and Z′ to compute the consistency constraint of Y′ on the fused image, the consistency constraint of Z′ on the fused image, and the description of the image's spatial relationships, from which the objective function is obtained. Finally, the objective function is solved by the conjugate gradient algorithm, and the fused image is computed from the solution. Compared with existing methods, the method of the invention retains more than 95% of image detail information while smoothing out image noise. The invention can be applied in the field of remote sensing image fusion.

Description

A detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data
Technical field
The invention belongs to the field of remote sensing image fusion, and in particular relates to a spatiotemporal-spectral integrated fusion method for remote sensing image data.
Background art
Existing remote sensing image fusion methods operate at three different levels, namely the data level, the feature level, and the decision level. According to the fusion target, remote sensing image fusion can be divided into multi-view spatial fusion, spatial-spectral fusion, and spatiotemporal fusion.
Most existing remote sensing image data fusion methods can integrate only two of the three indices of spatial, temporal, and spectral resolution, and therefore cannot produce a fused image with the highest temporal, spatial, and spectral resolution simultaneously. In addition, some remote sensing image data fusion methods are designed to merge complementary information from only one or two sensors and cannot fully exploit complementary observations from more sensors. Domestic scholars have made substantial contributions to the development of integrated fusion theory. Beginning in 2010, Wuhan University carried out related research with the support of the National Natural Science Foundation of China, first proposing the concept of spatiotemporal-spectral integrated fusion together with a fusion framework based on maximum a posteriori (MAP) probability theory, and achieved notable research results. The Chinese University of Hong Kong further explored the temporal, spatial, and spectral correlation models of remote sensing image data and, likewise based on MAP estimation, proposed a new spatiotemporal-spectral integrated fusion method, though still limited to fusing data from two sensors. Meng et al. and Wu et al. made impressive progress on three-sensor fusion, but did not comprehensively consider temporal, spatial, and spectral features. Meng et al. later proposed a unified fusion framework that can effectively integrate temporal, spatial, and spectral complementary information while removing the limit on the number of sensors; however, the prior model embedded in that method is a Gauss-Markov model, which, although it can effectively smooth image noise, also smooths out image detail information, degrading the detail of the fused image and yielding poor visual quality.
Summary of the invention
The purpose of the present invention is to solve the problem that existing remote sensing image data fusion methods smooth out image detail information while smoothing image noise.
The technical solution adopted by the present invention to solve the above technical problem is as follows:
A detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data, comprising the following steps:
Step 1: apply the Laplacian operator to edge-enhance the multi-view image Y and the auxiliary multi-source observation image Z, obtaining the enhanced multi-view image Y′ and the enhanced auxiliary multi-source observation image Z′;
Step 2: establish the spatial degradation model between the fused image x and Y′ and the spatiotemporal-spectral relationship model between the fused image x and Z′, and compute the consistency constraint p(Y′|x) of Y′ on the fused image x, the consistency constraint p(Z′|x) of Z′ on the fused image x, and the description p(x) of the image's spatial relationships;
Step 3: obtain the expression of the objective function F(x) from p(Y′|x), p(Z′|x), and p(x) computed in Step 2;
Step 4: solve the objective function F(x) by the conjugate gradient algorithm, and compute the fused image x from the solution of F(x).
The beneficial effects of the present invention are as follows. The present invention provides a detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data. The method first applies the Laplacian operator to edge-enhance the multi-view image and the auxiliary multi-source observation image, and can therefore retain image detail information while smoothing image noise. It then uses the spatial degradation model between the fused image and the enhanced multi-view image and the spatiotemporal-spectral relationship model between the fused image and the enhanced auxiliary multi-source observation image to compute the consistency constraint of the enhanced multi-view image on the fused image, the consistency constraint of the enhanced auxiliary multi-source observation image on the fused image, and the description of the image's spatial relationships, from which the objective function is further obtained. Finally, the objective function is solved by the conjugate gradient algorithm, and the fused image is computed from the solution. Compared with existing methods, the method of the present invention retains more than 95% of image detail information while smoothing out image noise.
Brief description of the drawings
Fig. 1 is a flowchart of the detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data of the present invention.
Specific embodiment
Specific embodiment 1: as shown in Fig. 1, the detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data described in this embodiment comprises the following steps:
Step 1: apply the Laplacian operator to edge-enhance the multi-view image Y and the auxiliary multi-source observation image Z, obtaining the enhanced multi-view image Y′ and the enhanced auxiliary multi-source observation image Z′;
Step 2: establish the spatial degradation model between the fused image x and Y′ and the spatiotemporal-spectral relationship model between the fused image x and Z′, and compute the consistency constraint p(Y′|x) of Y′ on the fused image x, the consistency constraint p(Z′|x) of Z′ on the fused image x, and the description p(x) of the image's spatial relationships;
Step 3: obtain the expression of the objective function F(x) from p(Y′|x), p(Z′|x), and p(x) computed in Step 2;
Step 4: solve the objective function F(x) by the conjugate gradient algorithm, and compute the fused image x from the solution of F(x).
Specific embodiment 2: this embodiment differs from specific embodiment 1 in that the detailed process of Step 1 is as follows:
The multi-view image Y = {y1, ..., yk, ...}, where yk is the k-th observed image and K is the total number of multi-view images. To obtain the high-resolution fused image at time k, the premise is that an observed image yk is available at each of the given times 1, ..., K. In general, the image Y has high spectral and temporal resolution but low spatial resolution; Y can therefore be regarded as a spatially degraded image, and the x obtained by the corresponding enhancement-fusion process is an image with higher spatial resolution.
The auxiliary multi-source observation images Z = {z1, z2, ..., zn, ..., zN}, where zn denotes the n-th image and N is the total number of auxiliary multi-source observation images; zn is typically a panchromatic or multispectral (MS) image, with higher spatial resolution but lower spectral or temporal resolution. The integrated fusion framework can accomplish different types of fusion tasks, including multi-view spatial fusion, spatial-spectral fusion, spatiotemporal fusion, and spatiotemporal-spectral fusion.
The Laplacian (Laplace operator) is a second-order linear differential operator; compared with first-order differential operators, second-order derivatives localize edges more sharply and give a better sharpening effect. Assume the multi-view image Y is a two-dimensional image whose function expression is f(x, y), where x and y are the horizontal and vertical coordinates of the two-dimensional image. The Laplacian is defined as:
∇²f = ∂²f/∂x² + ∂²f/∂y² (1)
Since differential operators of any order are linear, the second-order operator, like the first-order one, can be realized by generating a template and then convolving with it. From the definition of the derivative:
∂²f/∂x² = f(x+1, y) + f(x−1, y) − 2f(x, y) (2)
∂²f/∂y² = f(x, y+1) + f(x, y−1) − 2f(x, y) (3)
Combining formulas (2) and (3) with the definition of the Laplacian gives:
∇²f = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y) (4)
For programming purposes, the Laplacian is expressed in template form as
[0 1 0; 1 −4 1; 0 1 0]
and its extended template is
[1 1 1; 1 −8 1; 1 1 1]
As can be seen from the template form, if a bright point appears in a darker region of the image, applying the Laplacian makes that bright point brighter. Because edges in an image are exactly the regions where the gray level jumps, the Laplacian sharpening template plays a major role in edge-detail enhancement.
Finally, the weighted difference between each pixel and its neighboring pixels is computed and added back to the original image, yielding the detail-enhanced image. The function expression g(x, y) of the enhanced multi-view image Y′ is then:
g(x, y) = f(x, y) + c∇²f(x, y) (5)
where c is a weight coefficient whose value is tied to the template definition above: when the center value of the template is negative, c = −1; conversely, c = 1.
In the same way, the enhanced auxiliary multi-source observation image Z′ is obtained.
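As an illustration (not the patent's own code), Step 1 can be sketched as a direct implementation of formulas (4) and (5), using the four-neighbour template with a negative centre and hence c = −1; the 3×3 test image values are made up:

```python
# Sketch of Laplacian edge enhancement, Eqs. (4)-(5): g = f + c * lap(f),
# with the 4-neighbour template [0 1 0; 1 -4 1; 0 1 0] and c = -1.
# Out-of-range neighbours are treated as zero; image values are invented.

def laplacian(img):
    """4-neighbour discrete Laplacian of a 2-D list image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nbrs = 0.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    nbrs += img[ni][nj]
            out[i][j] = nbrs - 4.0 * img[i][j]   # Eq. (4)
    return out

def sharpen(img, c=-1.0):
    """Eq. (5): g(x, y) = f(x, y) + c * laplacian(f)(x, y)."""
    lap = laplacian(img)
    return [[img[i][j] + c * lap[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]

# A bright point in a darker region: sharpening makes it brighter still.
img = [[10, 10, 10],
       [10, 50, 10],
       [10, 10, 10]]
out = sharpen(img)   # out[1][1] == 210.0 > img[1][1]
```

The same routine applied to Z yields Z′; a real implementation would replicate border pixels and run per spectral band.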
Specific embodiment 3: this embodiment differs from specific embodiment 2 in that the detailed process of Step 2 is as follows:
The spatial degradation model between the fused image x and Y′ is expressed as follows:
y′k,b = D Sk,b Mk xb + vk,b, 1 ≤ b ≤ Bx, 1 ≤ k ≤ K (6)
where y′k,b denotes the degraded observed image of the b-th band of the k-th image of Y′, xb denotes the b-th band of the fused image x, Bx is the total number of bands, Mk is the motion matrix, Sk,b is the optical blur matrix, D is the down-sampling matrix, and vk,b is zero-mean Gaussian noise caused by the sensor and the external environment.
Simplifying formula (6), the spatial degradation model is expressed as:
y′k,b = Ay,k,b xb + vk,b (7)
where Ay,k,b denotes the spatial degradation matrix between the fused image x and Y′, and Ay,k,b = D Sk,b Mk.
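For intuition, the degradation chain D·S·M of formula (6) can be sketched in one dimension, taking M as the identity (no motion), S as a 3-tap box blur, D as factor-2 down-sampling, and omitting the noise term v; all signal values are made up:

```python
# 1-D sketch of the spatial degradation model y' = D S M x + v (Eq. 6),
# with M = identity, S = 3-tap box blur, D = factor-2 decimation, v = 0.

def blur3(x):
    """Optical blur S as a box blur with replicated borders."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def downsample2(x):
    """Down-sampling matrix D as factor-2 decimation."""
    return x[::2]

def degrade(x):
    """y' = D S x for this sketch (motion and noise omitted)."""
    return downsample2(blur3(x))

x_hr = [0.0, 0.0, 9.0, 0.0, 0.0, 0.0]   # a high-resolution band x_b
y_lr = degrade(x_hr)                    # observation y'_{k,b} == [0.0, 3.0, 0.0]
```

The fusion problem is the inverse of this chain: recover x_hr from several such degraded observations.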
The spatiotemporal-spectral relationship model between the fused image x and Z′ is expressed as follows:
z′n,q = ψn,q Cn,q Az,n,q xq + τn,q + vn,q, 1 ≤ q ≤ Bz,n, 1 ≤ n ≤ N (8)
where z′n,q denotes the degraded observed image of the q-th band of the n-th image of Z′, xq denotes the q-th spectral band of the fused image x, Bz,n is the total number of spectral bands, Az,n,q denotes the spatial degradation matrix between the fused image x and Z′, Cn,q is the spectral correlation matrix, ψn,q is the temporal correlation matrix, τn,q is the temporal offset, and vn,q is zero-mean sensor noise.
The estimate x̂ of the fused image x is then expressed as follows:
x̂ = arg max_x p(x | Y′, Z′) (9)
where p(x | Y′, Z′) represents the consistency constraint of the fused image x with respect to Y′ and Z′, and x̂ is the value of x at which p(x | Y′, Z′) attains its maximum.
By Bayes' formula:
p(x | Y′, Z′) ∝ p(Y′ | x) p(Z′ | x) p(x) (10)
where p(Y′ | x) denotes the consistency constraint of Y′ on the fused image x, p(Z′ | x) denotes the consistency constraint of Z′ on the fused image x, and p(x) represents the description of the image's spatial relationships.
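As a toy illustration of the MAP estimate in formulas (9)-(10) (not the patent's multi-band problem), consider a scalar x observed twice under Gaussian noise, with a Gaussian prior; maximizing p(y|x)·p(z|x)·p(x) then reduces to a precision-weighted average. All numbers are invented:

```python
# Scalar MAP sketch of x_hat = argmax_x p(y|x) p(z|x) p(x), all Gaussian.

def map_estimate(y, var_y, z, var_z, mu0, var0):
    """Closed-form maximizer: precision-weighted average of y, z, and mu0."""
    w_y, w_z, w_0 = 1.0 / var_y, 1.0 / var_z, 1.0 / var0
    return (w_y * y + w_z * z + w_0 * mu0) / (w_y + w_z + w_0)

# With a nearly flat prior (huge var0) the estimate sits between the two
# observations, pulled toward the more precise one.
x_hat = map_estimate(y=2.0, var_y=1.0, z=4.0, var_z=1.0, mu0=0.0, var0=1e6)
```

In the patent, the same maximization is carried out over whole image bands, with the Huber-Markov prior playing the role of p(x).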
Assuming the zero-mean Gaussian noise vk,b caused by the sensor and the external environment follows a random Gaussian distribution, p(Y′ | x) is expressed as follows:
p(Y′ | x) = ∏k ∏b p(y′k,b | xb) = ∏k ∏b (2π ay,k,b)^(−φ1φ2/2) exp(−||y′k,b − Ay,k,b xb||₂² / (2 ay,k,b)) (11)
where p(y′k,b | xb) denotes the consistency constraint of the degraded observed image of the b-th band of the k-th image of Y′ on the b-th band of the fused image x, ay,k,b is the variance of vk,b, Bx is the number of spectral bands, φ1φ2 denotes the spatial dimension of y′k,b, and ||·||₂ denotes the 2-norm.
Assuming vn,q follows a random Gaussian distribution, p(Z′ | x) is expressed as follows:
p(Z′ | x) = ∏n ∏q p(z′n,q | xq) = ∏n ∏q (2π az,n,q)^(−H1H2/2) exp(−||z′n,q − ψn,q Cn,q Az,n,q xq − τn,q||₂² / (2 az,n,q)) (13)
where p(z′n,q | xq) denotes the consistency constraint of the q-th band of the n-th image of Z′ on the q-th band of the fused image x, az,n,q denotes the variance of the noise vn,q, and H1H2 denotes the spatial dimension of z′n,q.
The third probability density function p(x) is the image prior, which describes the spatial relationships of the image. We introduce the Huber-Markov prior model. Compared with the Gauss-Markov model, this model can eliminate the instability introduced by the fusion process while preserving image edges, i.e., enhancing detail. With adaptive weighting of the three-dimensional spatial-spectral structure based on the Laplacian, p(x) is expressed as:
p(x) = ∏b (2π ax,b)^(−L1L2/2) exp(−ρ(Q xb) / (2 ax,b)) (15)
where ax,b is the variance of the randomly Gaussian-distributed noise vx,b, L1L2 denotes the spatial dimension, ρ(·) is the Huber function, and Q xb denotes the adaptive weighted three-dimensional Laplacian operator matrix applied to xb. In Q xb, x̃b is the initial estimate of the fused image obtained by resampling, and β is a parameter given by formula (21), in which μ is a threshold parameter. If the fused image has only a few spectral bands, the spectral curve is discontinuous and setting β = 0 is feasible; conversely, if the required image is a hyperspectral image, the spectral curve can be assumed continuous, and the adaptive weighted prior term then effectively preserves the spectral curve and reduces spectral distortion. When the differences between different spectral bands are very small, the spectral constraint is stronger; the weighting involves the gradient of the b-th band along the spectral dimension.
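The Huber penalty ρ(·) in formula (15) is the reason the prior smooths noise without flattening edges: it is quadratic for small arguments and linear for large ones. A minimal sketch, with the threshold μ = 1.0 chosen arbitrarily for illustration:

```python
# Huber function rho(t): quadratic below the threshold mu (Gauss-Markov-like
# noise smoothing), linear above it (edges penalised far less than t**2).

def huber(t, mu=1.0):
    a = abs(t)
    if a <= mu:
        return t * t                    # quadratic zone
    return 2.0 * mu * a - mu * mu       # linear zone, continuous at |t| = mu

small = huber(0.5)    # 0.25, same as a quadratic penalty
large = huber(10.0)   # 19.0, versus 100.0 for a pure quadratic
```

The linear tail is what keeps a large gray-level jump (an edge) from being smoothed away, which is the failure mode of the pure Gauss-Markov prior criticized in the background section.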
Specific embodiment 4: this embodiment differs from specific embodiment 3 in that the detailed process of Step 3 is as follows:
Substituting formulas (11), (13), and (15) into formula (10) and simplifying according to the monotonicity of the logarithmic function, many constant parameters can be deleted, and the objective function F(x) can finally be expressed as a regularized minimization problem. The estimate x̂ of the fused image x is:
x̂ = arg min_x F(x) (16)
with
F(x) = Σk Σb ||y′k,b − Ay,k,b xb||₂² + λ1 Σn Σq ωn,q ||z′n,q − ψn,q Cn,q Az,n,q xq − τn,q||₂² + λ2 Σb ρ(Q xb)
where x̂ is the value of x at which F(x) attains its minimum.
The first term expresses the consistency constraint between x and Y′, the second term the relationship between x and Z′, and the third term the image prior. λ1 and λ2 denote the weight coefficients of the respective parts; they are related to the noise variances and regulate the proportions of the three parts. ωn,q denotes the contribution of z′n,q to the fused image x and is computed adaptively as ωn,q = λ′n,q Un, where Un is obtained from the correlation between z′n and x; it is assumed that the larger the correlation, the larger the contribution. The auxiliary parameter λ′n,q adaptively adjusts the spatial-detail balance of each band of the fused image and is expressed as:
λ′n,q = f(z′n,q, x) / min[f(z′1,2, x), ..., f(z′1,q, x), ..., f(z′n,q, x)]
where f(z′n,q, x) denotes the number of bands of z′n,q that are fused.
Specific embodiment 5: this embodiment differs from specific embodiment 4 in that the detailed process of Step 4 is as follows:
The objective function F(x) is solved by the conjugate gradient algorithm as follows.
Differentiating the objective function F(x) with respect to xb gives:
∇F(xb) = Σk Aᵀy,k,b (Ay,k,b xb − y′k,b) + λ1 Σn ωn,b (ψn,b Cn,b Az,n,b)ᵀ (ψn,b Cn,b Az,n,b xb + τn,b − z′n,b) + λ2 Qᵀ ρ′(Q xb) (18)
where ∇F(xb) denotes the derivative of the objective function with respect to xb, Aᵀy,k,b is the transpose of Ay,k,b, and Cn,b denotes the spectral correlation matrix corresponding to the b-th band of the fused image x.
The estimate of the b-th band of the fused image x is obtained by successive iterations:
xb,d+1 = xb,d + θd eb,d (19)
where xb,d+1 denotes the b-th band of the fused image x after the (d+1)-th iteration, eb,d denotes the search direction of the d-th iteration, the initial search direction is eb,0 = −∇F(xb)0 (that is, for the first iteration the search direction is the negative gradient, and each subsequent search direction is related to the current and previous search directions), and θd is the iteration step length.
F(xb)d is the objective function value at the d-th iteration; the search direction eb,d+1 of the (d+1)-th iteration is then:
eb,d+1 = −∇F(xb)d+1 + γd eb,d (20)
with the intermediate variable γd = ||∇F(xb)d+1||₂² / ||∇F(xb)d||₂².
Using the search direction eb,d+1 of the (d+1)-th iteration, compute the b-th band xb,d+2 of the fused image x after the (d+2)-th iteration, then use xb,d+2 to compute F(xb)d+2.
Proceeding in this way, compute the objective function for each iteration. The detailed process of computing the fused image x from the solution of the objective function F(x) is: substitute the objective function of each iteration into formula (16) and compute the corresponding estimate of the b-th band of the fused image x, until ||∇F(xb)d||₂ ≤ ζ, at which point the iteration stops and xb,d is taken as the estimate of the b-th band of the fused image x, where ζ is the iteration termination threshold.
In the same way, compute the estimates of the remaining bands of the fused image x, obtaining the fused image x.
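The conjugate gradient iteration of formulas (19)-(20) can be sketched on a tiny symmetric positive-definite system A x = b, standing in for the quadratic part of F(x); the Fletcher-Reeves form of γd and the matrix values here are illustrative assumptions, not taken from the patent:

```python
# Conjugate gradient sketch: x_{d+1} = x_d + theta_d * e_d (Eq. 19),
# e_{d+1} = -grad_{d+1} + gamma_d * e_d (Eq. 20), stopping when the
# squared gradient norm falls below the termination threshold zeta.

def matvec(A, v):
    return [sum(a * w for a, w in zip(row, v)) for row in A]

def conjugate_gradient(A, b, x0, zeta=1e-12, max_iter=100):
    x = list(x0)
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]  # residual = -gradient
    e = list(r)                                       # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rs < zeta:                                 # ||grad||^2 <= zeta
            break
        Ae = matvec(A, e)
        theta = rs / sum(ei * ai for ei, ai in zip(e, Ae))   # step length
        x = [xi + theta * ei for xi, ei in zip(x, e)]        # Eq. (19)
        r = [ri - theta * ai for ri, ai in zip(r, Ae)]
        rs_new = sum(ri * ri for ri in r)
        gamma = rs_new / rs                           # Fletcher-Reeves ratio
        e = [ri + gamma * ei for ri, ei in zip(r, e)] # Eq. (20)
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])   # converges to [1/11, 7/11]
```

In the patent's setting the gradient of F(x) in formula (18) would supply the residuals, and the loop would be run once per band xb.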

Claims (5)

1. A detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data, characterized in that the method comprises the following steps:
Step 1: applying the Laplacian operator to edge-enhance the multi-view image Y and the auxiliary multi-source observation image Z, obtaining the enhanced multi-view image Y′ and the enhanced auxiliary multi-source observation image Z′;
Step 2: establishing the spatial degradation model between the fused image x and Y′ and the spatiotemporal-spectral relationship model between the fused image x and Z′, and computing the consistency constraint p(Y′|x) of Y′ on the fused image x, the consistency constraint p(Z′|x) of Z′ on the fused image x, and the description p(x) of the image's spatial relationships;
Step 3: obtaining the expression of the objective function F(x) from p(Y′|x), p(Z′|x), and p(x) computed in Step 2;
Step 4: solving the objective function F(x) by the conjugate gradient algorithm, and computing the fused image x from the solution of F(x).

2. The detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data according to claim 1, characterized in that the detailed process of Step 1 is:
the multi-view image Y = {y1, y2, ..., yk, ..., yK}, where yk is the k-th observed image and K is the total number of multi-view images; the auxiliary multi-source observation images Z = {z1, z2, ..., zn, ..., zN}, where zn denotes the n-th image and N is the total number of auxiliary multi-source observation images;
the Laplacian is a second-order linear differential operator; assuming the multi-view image Y is a two-dimensional image with function expression f(x, y), the Laplacian is defined as:
∇²f = ∂²f/∂x² + ∂²f/∂y² (1)
from the definition of the derivative:
∂²f/∂x² = f(x+1, y) + f(x−1, y) − 2f(x, y) (2)
∂²f/∂y² = f(x, y+1) + f(x, y−1) − 2f(x, y) (3)
combining formulas (2) and (3) with the definition of the Laplacian gives:
∇²f = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y) (4)
the function expression g(x, y) of the enhanced multi-view image Y′ is then:
g(x, y) = f(x, y) + c∇²f(x, y) (5)
where c is a weight coefficient;
the enhanced auxiliary multi-source observation image Z′ is obtained in the same way.

3. The detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data according to claim 2, characterized in that the detailed process of Step 2 is:
the spatial degradation model between the fused image x and Y′ is expressed as:
y′k,b = D Sk,b Mk xb + vk,b, 1 ≤ b ≤ Bx, 1 ≤ k ≤ K (6)
where y′k,b denotes the degraded observed image of the b-th band of the k-th image of Y′, xb denotes the b-th band of the fused image x, Bx is the total number of bands, Mk is the motion matrix, Sk,b is the optical blur matrix, D is the down-sampling matrix, and vk,b is zero-mean Gaussian noise caused by the sensor and the external environment;
simplifying formula (6), the spatial degradation model is expressed as:
y′k,b = Ay,k,b xb + vk,b (7)
where Ay,k,b denotes the spatial degradation matrix between the fused image x and Y′, and Ay,k,b = D Sk,b Mk;
the spatiotemporal-spectral relationship model between the fused image x and Z′ is expressed as:
z′n,q = ψn,q Cn,q Az,n,q xq + τn,q + vn,q, 1 ≤ q ≤ Bz,n, 1 ≤ n ≤ N (8)
where z′n,q denotes the degraded observed image of the q-th band of the n-th image of Z′, xq denotes the q-th spectral band of the fused image x, Bz,n is the total number of spectral bands, Az,n,q denotes the spatial degradation matrix between the fused image x and Z′, Cn,q is the spectral correlation matrix, ψn,q is the temporal correlation matrix, τn,q is the temporal offset, and vn,q is zero-mean sensor noise;
the estimate x̂ of the fused image x is then expressed as:
x̂ = arg max_x p(x | Y′, Z′) (9)
where p(x | Y′, Z′) represents the consistency constraint of the fused image x with respect to Y′ and Z′, and x̂ is the value of x at which p(x | Y′, Z′) attains its maximum;
by Bayes' formula:
p(x | Y′, Z′) ∝ p(Y′ | x) p(Z′ | x) p(x) (10)
where p(Y′ | x) denotes the consistency constraint of Y′ on the fused image x, p(Z′ | x) denotes the consistency constraint of Z′ on the fused image x, and p(x) represents the description of the image's spatial relationships;
assuming the zero-mean Gaussian noise vk,b caused by the sensor and the external environment follows a random Gaussian distribution, p(Y′ | x) is expressed as:
p(Y′ | x) = ∏k ∏b (2π ay,k,b)^(−φ1φ2/2) exp(−||y′k,b − Ay,k,b xb||₂² / (2 ay,k,b)) (11)
where p(y′k,b | xb) denotes the consistency constraint of the degraded observed image of the b-th band of the k-th image of Y′ on the b-th band of the fused image x, ay,k,b is the variance of the noise vk,b, φ1φ2 denotes the spatial dimension of y′k,b, and ||·||₂ denotes the 2-norm;
assuming vn,q follows a random Gaussian distribution, p(Z′ | x) is expressed as:
p(Z′ | x) = ∏n ∏q (2π az,n,q)^(−H1H2/2) exp(−||z′n,q − ψn,q Cn,q Az,n,q xq − τn,q||₂² / (2 az,n,q)) (13)
where p(z′n,q | xq) denotes the consistency constraint of the q-th band of the n-th image of Z′ on the q-th band of the fused image x, az,n,q denotes the variance of the noise vn,q, and H1H2 denotes the spatial dimension of z′n,q;
with adaptive weighting of the three-dimensional spatial-spectral structure based on the Laplacian, p(x) is expressed as:
p(x) = ∏b (2π ax,b)^(−L1L2/2) exp(−ρ(Q xb) / (2 ax,b)) (15)
where ax,b is the variance of the randomly Gaussian-distributed noise vx,b, L1L2 denotes the spatial dimension, ρ(·) is the Huber function, and Q xb denotes the adaptive weighted three-dimensional Laplacian operator matrix applied to xb.

4. The detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data according to claim 3, characterized in that the detailed process of Step 3 is:
substituting formulas (11), (13), and (15) into formula (10), the objective function F(x) is expressed as a regularized minimization problem:
x̂ = arg min_x F(x) (16)
where x̂ is the value of x at which F(x) attains its minimum; λ1 and λ2 denote the weight coefficients of the respective parts, and ωn,q denotes the contribution of z′n,q to the fused image x.

5. The detail-enhanced spatiotemporal-spectral integrated fusion method for remote sensing image data according to claim 4, characterized in that the detailed process of Step 4 is:
differentiating the objective function F(x) with respect to xb gives ∇F(xb) (18), where Aᵀy,k,b is the transpose of Ay,k,b and Cn,b denotes the spectral correlation matrix corresponding to the b-th band of the fused image x;
the estimate of the b-th band of the fused image x is obtained by successive iterations:
xb,d+1 = xb,d + θd eb,d (19)
where xb,d+1 denotes the b-th band of the fused image x after the (d+1)-th iteration, eb,d denotes the search direction of the d-th iteration, the initial search direction is eb,0 = −∇F(xb)0, and θd is the iteration step length;
F(xb)d is the objective function value at the d-th iteration; the search direction eb,d+1 of the (d+1)-th iteration is:
eb,d+1 = −∇F(xb)d+1 + γd eb,d (20)
with intermediate variable γd;
using the search direction eb,d+1 of the (d+1)-th iteration, compute the b-th band xb,d+2 of the fused image x after the (d+2)-th iteration, then use xb,d+2 to compute F(xb)d+2;
proceeding in this way, compute the objective function for each iteration; substitute the objective function of each iteration into formula (16) and compute the corresponding estimate of the b-th band of the fused image x, stopping the iteration when the termination condition with threshold ζ is met and taking the current xb,d as the estimate of the b-th band of the fused image x, where ζ is the iteration termination threshold;
in the same way, compute the estimates of the remaining bands of the fused image x, obtaining the fused image x.
CN201811142766.8A 2018-09-28 2018-09-28 A spatiotemporal spectral integration fusion method for remote sensing image data with enhanced detail Pending CN109285133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811142766.8A CN109285133A (en) 2018-09-28 2018-09-28 A spatiotemporal spectral integration fusion method for remote sensing image data with enhanced detail

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811142766.8A CN109285133A (en) 2018-09-28 2018-09-28 A spatiotemporal spectral integration fusion method for remote sensing image data with enhanced detail

Publications (1)

Publication Number Publication Date
CN109285133A true CN109285133A (en) 2019-01-29

Family

ID=65182514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811142766.8A Pending CN109285133A (en) 2018-09-28 2018-09-28 A spatiotemporal spectral integration fusion method for remote sensing image data with enhanced detail

Country Status (1)

Country Link
CN (1) CN109285133A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130011078A1 (en) * 2011-02-03 2013-01-10 Massachusetts Institute Of Technology Hyper-Resolution Imaging
CN102915529A (en) * 2012-10-15 2013-02-06 黄波 Integrated fusion technique and system based on remote sensing of time, space, spectrum and angle
CN105809148A (en) * 2016-03-29 2016-07-27 中国科学院遥感与数字地球研究所 Crop drought recognition and risk evaluation method based on remote sensing time-space-spectrum fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUANFENG SHEN et al.: "An Integrated Framework for the Spatio-Temporal-Spectral Fusion of Remote Sensing Images", IEEE Transactions on Geoscience and Remote Sensing *
WU YU: "Digital Image Processing", Beijing University of Posts and Telecommunications Press, 31 October 2017 *
MENG XIANGCHAO: "Variational Fusion Methods for Multi-source Spatio-Temporal-Spectral Optical Remote Sensing Images", China Doctoral Dissertations Full-text Database, Basic Sciences *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751036A (en) * 2019-09-17 2020-02-04 宁波大学 High spectrum/multi-spectrum image fast fusion method based on sub-band and blocking strategy
CN111742329A (en) * 2020-05-15 2020-10-02 安徽中科智能感知产业技术研究院有限责任公司 Mining typical ground object dynamic monitoring method and platform based on multi-source remote sensing data fusion and deep neural network
CN111742329B (en) * 2020-05-15 2023-09-12 安徽中科智能感知科技股份有限公司 Mining typical feature dynamic monitoring method and platform based on multi-source remote sensing data fusion and deep neural network
CN111882511A (en) * 2020-07-09 2020-11-03 广东海洋大学 Multisource remote sensing image fusion method based on integral enhanced gradient iterative neural network
CN112017135B (en) * 2020-07-13 2021-09-21 香港理工大学深圳研究院 Method, system and equipment for spatial-temporal fusion of remote sensing image data
CN112017135A (en) * 2020-07-13 2020-12-01 香港理工大学深圳研究院 Method, system and equipment for spatial-temporal fusion of remote sensing image data
CN112767292A (en) * 2021-01-05 2021-05-07 同济大学 Geographical weighting spatial mixed decomposition method for space-time fusion
CN112767292B (en) * 2021-01-05 2022-09-16 同济大学 A Geographically Weighted Spatial Hybrid Decomposition Method for Spatio-temporal Fusion
CN112906577A (en) * 2021-02-23 2021-06-04 清华大学 Fusion method of multi-source remote sensing image
CN112906577B (en) * 2021-02-23 2024-04-26 清华大学 Fusion method of multisource remote sensing images
CN113627357A (en) * 2021-08-13 2021-11-09 哈尔滨工业大学 A high spatial-high spectral resolution remote sensing image eigendecomposition method and system
CN115000107A (en) * 2022-06-02 2022-09-02 广州睿芯微电子有限公司 Multispectral imaging chip, multispectral imaging component, preparation method and mobile terminal
CN115204314A (en) * 2022-08-12 2022-10-18 西南交通大学 Multi-source data fusion method based on vehicle-mounted OBU and vehicle-mounted OBU
CN115204314B (en) * 2022-08-12 2023-05-30 西南交通大学 Multi-source data fusion method based on vehicle-mounted OBU and vehicle-mounted OBU

Similar Documents

Publication Publication Date Title
CN109285133A (en) A spatiotemporal spectral integration fusion method for remote sensing image data with enhanced detail
CN109859147B (en) A Real Image Denoising Method Based on Noise Modeling in Generative Adversarial Networks
Riegler et al. A deep primal-dual network for guided depth super-resolution
CN110210524B (en) Training method of image enhancement model, image enhancement method and device
WO2018000752A1 (en) Monocular image depth estimation method based on multi-scale cnn and continuous crf
Deng et al. A guided edge-aware smoothing-sharpening filter based on patch interpolation model and generalized gamma distribution
CN102722863A (en) Super-resolution reconstruction method for depth map by adopting autoregressive model
CN106127689B (en) Image and video super-resolution method and device
CN104869387A (en) Method for acquiring binocular image maximum parallax based on optical flow method
CN111986085B (en) Image super-resolution method based on depth feedback attention network system
CN112581378A (en) Image blind deblurring method and device based on significance intensity and gradient prior
CN113673545A (en) Optical flow estimation method, related device, equipment and computer readable storage medium
Wang et al. Multi-scale fusion and decomposition network for single image deraining
CN103559684A (en) Method for restoring images based on smooth correction
Liu et al. Video frame interpolation via optical flow estimation with image inpainting
CN103971354A (en) Method for reconstructing low-resolution infrared image into high-resolution infrared image
Gao et al. A fast view synthesis implementation method for light field applications
Ye et al. Depth super-resolution via deep controllable slicing network
Kumar et al. Image deconvolution using deep learning-based Adam optimizer
CN116152100A (en) Point cloud denoising method, device and storage medium based on feature analysis and scale selection
CN115511708A (en) Depth map super-resolution method and system based on uncertainty-aware feature transmission
Dong et al. A non-local propagation filtering scheme for edge-preserving in variational optical flow computation
Ranjan et al. Deep learning based image deblurring: A comparative survey
CN112734655B (en) Low-light image enhancement method for enhancing CRM (customer relationship management) based on convolutional neural network image
CN114119667A (en) Correlation Filter Tracking Algorithm Based on Spatio-temporal Regularization and Context Awareness

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190129