CN100410684C - Remote sensing image fusion method based on Bayes linear estimation - Google Patents


Publication number
CN100410684C
Authority
CN
China
Legal status
Expired - Fee Related
Application number
CNB2006100241111A
Other languages
Chinese (zh)
Other versions
CN1808181A (en)
Inventor
葛志荣
王斌
张立明
Current Assignee
Fudan University
Original Assignee
Fudan University
Priority date
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CNB2006100241111A
Publication of CN1808181A
Application granted
Publication of CN100410684C
Expired - Fee Related


Abstract

The present invention belongs to the technical field of remote sensing image processing, and more specifically relates to a remote sensing image fusion method based on statistical parameter estimation. The method introduces an observation model between the high-resolution multispectral image and the low-resolution multispectral image, and an observation model between the high-resolution multispectral image and the panchromatic image, and combines the two observation models into a Bayesian linear model. An estimate of the high-resolution multispectral image that is optimal in the linear minimum mean square error sense is then obtained using the Bayesian Gauss-Markov theorem. The method not only enhances spatial detail but also preserves spectral characteristics well; its performance is superior to the traditional IHS method, the PCA method, the wavelet transform method, and the existing statistical-estimation-based methods of Nishii and Hardie. The new method provides effective technical support for improving the accuracy of visual interpretation of remote sensing images and enhancing the clarity and reliability of their information.

Description

Remote Sensing Image Fusion Method Based on Bayesian Linear Estimation
Technical field
The invention belongs to the technical field of remote sensing image processing, and specifically relates to a remote sensing image fusion method based on Bayesian linear estimation.
Background technology
Owing to constraints of remote sensor design, remote sensing images generally trade off spatial resolution against spectral resolution: an image with high spectral resolution usually does not have high spatial resolution, and vice versa. For example, the Landsat ETM+ sensor provides six multispectral band images at 30 m spatial resolution and one panchromatic band image at 15 m spatial resolution. In practical applications, images that combine high spatial resolution with high spectral resolution can effectively improve the accuracy of interpretation and classification, so the fusion of remote sensing images of different resolutions has become a research focus, in particular the fusion of a low-resolution multispectral image with a high-resolution panchromatic image. In general, the fused image is required both to preserve the spectral characteristics of the multispectral image and to incorporate the spatial information of the panchromatic image.
Common fusion methods at present include the IHS method [1], the PCA method [2,3], and the wavelet transform method [4,5]. The IHS and PCA methods often distort the spectral characteristics of the original multispectral image significantly, while the wavelet transform method is sensitive to the choice of decomposition level and wavelet basis, so different operators may obtain different fusion results.
In recent years, methods based on statistical parameter estimation [6-9] have begun to be applied to remote sensing image fusion. Nishii et al. [8] assume that the high-resolution multispectral image and the panchromatic image follow a joint Gaussian distribution and use the conditional mean as the estimate. Hardie et al. [9], on the basis of the same joint Gaussian assumption, introduce an observation model between the high-resolution and low-resolution multispectral images and obtain an estimate in the maximum a posteriori (MAP) sense. When the correlation between the low-resolution multispectral image and the panchromatic image is weak, however, the Nishii and Hardie methods have difficulty incorporating the spatial information of the panchromatic image.
In view of these problems, enhancing spatial detail while preserving spectral characteristics well, and guaranteeing the robustness of the algorithm, has become a focus of current research on remote sensing image fusion.
Summary of the invention
The objective of the invention is to propose a remote sensing image fusion method based on Bayesian linear estimation that overcomes the dependence of traditional statistical-estimation methods on the correlation coefficient between the multispectral and panchromatic images, enhances spatial detail, and preserves spectral characteristics.
The remote sensing image fusion method based on Bayesian linear estimation proposed by the invention proceeds as follows: introduce an observation model between the high-resolution multispectral image and the low-resolution multispectral image, and an observation model between the high-resolution multispectral image and the panchromatic image; combine the two observation models into a Bayesian linear model; then apply the Bayesian Gauss-Markov theorem to compute the estimate of the high-resolution multispectral image in the linear minimum mean square error (LMMSE) sense.
Below each step is further described in detail.
1. Introducing the observation models
Suppose the same area is imaged by a low-resolution multispectral sensor and by a high-resolution panchromatic sensor. Image resolution here refers to the ground area covered by each pixel; "high" and "low" resolution are relative notions and can be assigned as required. The panchromatic image is arranged into a one-dimensional column vector

x = [x_1, x_2, …, x_i, …, x_N]^T   (1)

where x_i denotes the pixel value of the panchromatic image at spatial position i, and N is the number of pixels of the panchromatic image. The low-resolution multispectral image is similarly arranged into a one-dimensional column vector

y = [y_1^T, y_2^T, …, y_j^T, …, y_M^T]^T   (2)

where y_j denotes the pixel value of the low-resolution multispectral image at spatial position j (with K bands, y_j = [y_{j,1}, y_{j,2}, …, y_{j,K}]^T), and M is the number of pixels of the low-resolution multispectral image.

Suppose the high-resolution multispectral image exists; it should contain the spectral information of the multispectral image while having the same spatial resolution as the panchromatic image. It is represented by the one-dimensional column vector

z = [z_1^T, z_2^T, …, z_i^T, …, z_N^T]^T   (3)

where z_i denotes the pixel value of the high-resolution multispectral image at spatial position i (with K bands, z_i = [z_{i,1}, z_{i,2}, …, z_{i,K}]^T), and N is the number of pixels of the high-resolution multispectral image.
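The lexicographic orderings of equations (1)-(3) can be illustrated with a minimal numpy sketch. The tiny 2x2, 3-band image below is hypothetical; the point is only that stacking pixel vectors z_i of length K produces a single vector of length N*K.

```python
import numpy as np

# Hypothetical 2x2 image with K=3 bands: shape (rows, cols, K)
img = np.arange(12).reshape(2, 2, 3)

# Ordering of eq. (3): z = [z_1^T, ..., z_N^T]^T, where z_i is the
# K-vector of band values at spatial position i (pixel-major stacking).
z = img.reshape(-1, 3).reshape(-1)   # length N*K = 12

# Pixel i=1 (position (0, 0)) occupies the first K entries of z
assert np.array_equal(z[:3], img[0, 0, :])
```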
In general, the low-resolution multispectral image can be regarded as obtained from the high-resolution multispectral image (if it exists) by low-pass filtering and downsampling [10]. The invention introduces the following observation model between the high-resolution and low-resolution multispectral images:

y = Hz + u   (4)

where u is random noise with zero mean and covariance matrix C_u, uncorrelated with z, and the matrix H represents the low-pass filtering and downsampling process.
In addition, we introduce the following observation model between the panchromatic image and the high-resolution multispectral image:

x = G^T z + v   (5)

where v is random noise with zero mean and covariance matrix C_v, uncorrelated with z. The matrix G represents a weighted average of the K bands of the high-resolution multispectral image, with weight factors

g_l = cc_l / Σ_{l=1}^{K} |cc_l|   (6)

where cc_l denotes the correlation coefficient between the panchromatic image and band l of the low-resolution multispectral image.
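The weight factors of eq. (6) are straightforward to compute; the sketch below uses the three correlation coefficients reported later for the ETM+ experiment as an example input. Note that the weights may be negative while their absolute values sum to one.

```python
import numpy as np

def band_weights(cc):
    """Weight factors of eq. (6): g_l = cc_l / sum_l |cc_l|, where cc_l
    is the correlation coefficient between the panchromatic image and
    band l of the low-resolution multispectral image."""
    cc = np.asarray(cc, dtype=float)
    return cc / np.abs(cc).sum()

# Correlation coefficients reported for the Landsat ETM+ experiment
g = band_weights([0.57, 0.18, -0.20])
assert abs(np.abs(g).sum() - 1.0) < 1e-12   # |g_l| sums to 1
```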
Since the number of equations provided by (4) and (5), M×K + N, is smaller than the number N×K of unknowns to be estimated (in general M < N), z cannot be solved for directly. The invention instead treats the two observation models as a Bayesian linear model and derives an estimator of the high-resolution multispectral image z.
When estimating the high-resolution multispectral image, observation models (4) and (5) can be combined into the Bayesian linear model

[y; x] = [H; G^T] z + [u; v]   (11)

where the semicolon denotes vertical stacking.
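As an illustration of the stacking in eq. (11), the following numpy sketch builds a toy block-averaging H and band-weighting G^T (sizes N=4, M=1, K=2 and the weights are all hypothetical) and forms the stacked observation matrix. It also makes the counting argument concrete: the stacked system has M×K + N = 6 equations for N×K = 8 unknowns.

```python
import numpy as np

# Toy sizes: N=4 high-res pixels, M=1 low-res pixel, K=2 bands.
N, M, K = 4, 1, 2
z = np.arange(N * K, dtype=float)        # stacked high-res image, eq. (3)

# H: average of all N pixels, acting on each band -> shape (M*K, N*K)
H = np.kron(np.full((M, N), 1.0 / N), np.eye(K))
# G^T: per-pixel weighted average of the K bands -> shape (N, N*K)
g = np.array([0.7, 0.3])                 # hypothetical weights, eq. (6)
GT = np.kron(np.eye(N), g)

A = np.vstack([H, GT])                   # stacked observation matrix of eq. (11)
obs = A @ z                              # noise-free stacked data [y; x]
assert obs.shape == (M * K + N,)         # 6 equations, 8 unknowns
# The first K entries are the per-band means of the high-res image
assert np.allclose(obs[:K], z.reshape(N, K).mean(axis=0))
```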
2. Linear estimation using the Bayesian Gauss-Markov theorem
Suppose the data are described by the Bayesian linear model

x̃ = Aθ + w   (7)

where x̃ is an L×1 data vector (not to be confused with the panchromatic image x), A is a known L×p observation matrix, and θ is the p×1 random vector to be estimated, with mean E(θ) and covariance matrix C_θ; w is an L×1 random vector with zero mean and covariance matrix C_w, uncorrelated with θ.

First suppose the estimator θ̂ of θ is obtained from the data as an affine combination

θ̂_i = Σ_j a_{ij} x̃_j + b_i   (8)

The weighting coefficients a_{ij} and b_i are chosen so that the Bayesian mean square error

BMSE(θ̂) = E[(θ − θ̂)^2]   (9)

is minimized. The resulting estimator is the linear minimum mean square error (LMMSE) estimator (Bayesian Gauss-Markov theorem [11]):

θ̂ = E(θ) + C_θ A^T (A C_θ A^T + C_w)^{-1} (x̃ − A E(θ))   (10)

with BMSE(θ̂_i) = [(C_θ^{-1} + A^T C_w^{-1} A)^{-1}]_{ii}.

Since θ in general cannot be expressed exactly as a linear combination of the data, the LMMSE estimator is not optimal among all estimators, but it is very useful in practice: it has a closed-form solution and depends only on the means and covariances.
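The estimator of eq. (10) is a few lines of numpy. The sketch below is generic (any A, C_θ, C_w); the sanity check at the end uses a random invertible A with near-zero noise, in which case the LMMSE estimate recovers θ almost exactly.

```python
import numpy as np

def lmmse(xt, A, mu_theta, C_theta, C_w):
    """LMMSE estimator of eq. (10):
    theta_hat = E(theta) + C_theta A^T (A C_theta A^T + C_w)^(-1) (xt - A E(theta))."""
    S = A @ C_theta @ A.T + C_w          # covariance of the data
    K = C_theta @ A.T @ np.linalg.inv(S) # gain matrix
    return mu_theta + K @ (xt - A @ mu_theta)

# Sanity check: noise-free data with an invertible A recovers theta.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
theta = rng.standard_normal(3)
est = lmmse(A @ theta, A, np.zeros(3), np.eye(3), 1e-12 * np.eye(3))
assert np.allclose(est, theta, atol=1e-4)
```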
For the Bayesian linear model (11), applying the LMMSE estimator (10) yields the estimate of the high-resolution multispectral image (writing n = [u^T, v^T]^T):

ẑ = E(z) + C_z [H; G^T]^T ([H; G^T] C_z [H; G^T]^T + C_n)^{-1} [y − E(y); x − E(x)]   (12)
The statistical parameters in (12) are estimated as follows. To obtain the estimator ẑ, the mean E(z) and covariance C_z of the high-resolution multispectral image must be known. Here the pixels of the high-resolution multispectral image are assumed to be mutually independent, so the mean and covariance of the entire image can be assembled from the means and covariances of the individual pixels [12]:

E(z) = [E(z_1)^T, E(z_2)^T, …, E(z_i)^T, …, E(z_N)^T]^T   (13)

C_z = diag(C_{z_1}, C_{z_2}, …, C_{z_N})   (14)

i.e. C_z is block diagonal, with the K×K per-pixel covariance matrices on its diagonal. E(z) is taken to be the multispectral image after bilinear interpolation (denoted B):

E(z) = B(y)   (15)

To estimate C_z, a vector quantization algorithm is used to cluster the vectors E(z_i) by Euclidean distance; the covariance matrix of each cluster is computed and assigned as the covariance matrix of every vector in that cluster.

In addition, the estimate of E(y) in (12) is obtained by low-pass filtering and downsampling (H) the interpolated image:

E(y) = H(E(z)) = H(B(y))   (16)

and the estimate of E(x) is obtained by low-pass filtering and downsampling the panchromatic image, followed by bilinear interpolation:

E(x) = B(H(x))   (17)
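The operators B and H and the moment estimates of eqs. (15)-(16) can be sketched as below. For brevity the sketch uses zero-order pixel replication for B (standing in for the bilinear interpolation of the text) and an r x r block average for H; with these simplified choices H exactly undoes B, so E(y) = H(B(y)) reproduces y.

```python
import numpy as np

def B(img, r):
    """Upsampling operator: a zero-order sketch standing in for the
    bilinear interpolation B of the text (repeat each pixel r x r times)."""
    return np.repeat(np.repeat(img, r, axis=0), r, axis=1)

def H(img, r):
    """Degradation operator of the text: low-pass filter and downsample,
    implemented here as an r x r block average."""
    m, n = img.shape[0] // r, img.shape[1] // r
    return img[:m * r, :n * r].reshape(m, r, n, r).mean(axis=(1, 3))

y = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy low-resolution band
Ez = B(y, 2)                             # eq. (15): E(z) = B(y)
Ey = H(Ez, 2)                            # eq. (16): E(y) = H(B(y))
assert np.allclose(Ey, y)                # H undoes B in this simplified sketch
```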
In actual computation, if the covariance matrix dimension is large, the matrix inversion in (12) becomes difficult; in this case the invention partitions the image into many small blocks and estimates each block separately.
The above Bayesian linear estimation method places no restriction on the number of bands of the images to be fused: the low-resolution multispectral image y may have more than three bands, and the high-resolution panchromatic image may be single-band or multi-band. The method therefore applies both to the fusion of a multi-band multispectral image with a single-band panchromatic image and to the fusion of a multi-band hyperspectral image with a multi-band multispectral image.
Description of drawings
Fig. 1 is a geometric interpretation of the remote sensing image fusion method based on Bayesian linear estimation.
Fig. 2 shows the fusion experiment on the Landsat ETM+ panchromatic and multispectral images. Fig. 2(a) is the multispectral image at 30 m spatial resolution; Fig. 2(b) is the panchromatic image at 30 m spatial resolution; Fig. 2(c) is the multispectral image at 120 m spatial resolution; Figs. 2(d)-(i) are the results of the IHS method, the PCA method, the wavelet transform method, the Nishii method, the Hardie method, and the Bayesian method, respectively.
Fig. 3 shows the fusion experiment on the SPOT panchromatic image and the TM multispectral image. Fig. 3(a) is the multispectral image at 30 m spatial resolution; Fig. 3(b) is the panchromatic image at 30 m spatial resolution; Fig. 3(c) is the multispectral image at 150 m spatial resolution; Figs. 3(d)-(i) are the results of the IHS method, the PCA method, the wavelet transform method, the Nishii method, the Hardie method, and the Bayesian method, respectively.
Embodiment
The components of the invention are further described through the following embodiment.
1. Setting up the observation models
The low-resolution multispectral image can be regarded as obtained from the high-resolution multispectral image (if it exists) by low-pass filtering and downsampling [10]; this observation model is

y = Hz + u   (4)

where u is random noise with zero mean and covariance matrix C_u, uncorrelated with z, and the matrix H represents the low-pass filtering and downsampling process.

Between the panchromatic image and the high-resolution multispectral image the observation model is

x = G^T z + v   (5)

where v is random noise with zero mean and covariance matrix C_v, uncorrelated with z. The matrix G represents a weighted average of the K bands of the high-resolution multispectral image, with weight factors

g_l = cc_l / Σ_{l=1}^{K} |cc_l|   (6)

where cc_l denotes the correlation coefficient between the panchromatic image and band l of the low-resolution multispectral image.
2. Combining the observation models into a Bayesian linear model
When estimating the high-resolution multispectral image, observation models (4) and (5) are combined into the Bayesian linear model

[y; x] = [H; G^T] z + [u; v]   (11)

where the semicolon denotes vertical stacking.
3. Applying Bayesian linear estimation
For the Bayesian linear model (11), applying the LMMSE estimator (10) yields the estimate of the high-resolution multispectral image (writing n = [u^T, v^T]^T):

ẑ = E(z) + C_z [H; G^T]^T ([H; G^T] C_z [H; G^T]^T + C_n)^{-1} [y − E(y); x − E(x)]   (12)
4. Estimating the statistical parameters
The pixels of the high-resolution multispectral image are assumed to be mutually independent, so the mean and covariance of the entire image can be assembled from the means and covariances of the individual pixels [12]:

E(z) = [E(z_1)^T, E(z_2)^T, …, E(z_i)^T, …, E(z_N)^T]^T   (13)

C_z = diag(C_{z_1}, C_{z_2}, …, C_{z_N})   (14)

E(z) is the multispectral image after bilinear interpolation (B):

E(z) = B(y)   (15)

To estimate C_z, a vector quantization algorithm is used to cluster the vectors E(z_i) by Euclidean distance; the covariance matrix of each cluster is computed and assigned as the covariance matrix of every vector in that cluster.

The estimate of E(y) in (12) is obtained by low-pass filtering and downsampling (H) the interpolated image:

E(y) = H(E(z)) = H(B(y))   (16)

and the estimate of E(x) is obtained by low-pass filtering and downsampling the panchromatic image, followed by bilinear interpolation:

E(x) = B(H(x))   (17)
Simulations were carried out for the above method under the following conditions:
(1) the panchromatic and multispectral images of Landsat 7 ETM+;
(2) the panchromatic image of SPOT and the multispectral image of TM.
Fig. 2 shows the multispectral and panchromatic images of the Shanghai area acquired by the Landsat 7 ETM+ sensor on June 14, 2000. The panchromatic image has a spatial resolution of 15 m and the multispectral image a spatial resolution of 30 m; bands 3, 2 and 1 are used as the R, G and B channels, respectively.
Since Landsat 7 ETM+ does not provide a true 15 m resolution multispectral image for comparison, it is difficult to assess the fusion results of the various methods directly. To address this, the spatial resolutions of the panchromatic and multispectral images were degraded to 30 m and 120 m, respectively; the degraded images were fused, and the fusion results were compared with the original 30 m resolution multispectral image.
Fig. 3 shows the SPOT panchromatic image of the Hanoi area acquired on October 26, 1995, and the corresponding TM multispectral image (http://earth.esa.int/mcities/images/cases). The SPOT panchromatic image was degraded from 10 m to 30 m resolution and the TM multispectral image from 30 m to 150 m resolution. Bands 3, 2 and 1 of TM are used as the R, G and B channels, respectively.
The proposed method is compared with the following methods:
(1) the traditional IHS and PCA methods;
(2) the wavelet transform method, using the Daubechies wavelet basis of order 4 with 3 decomposition levels;
(3) the Nishii and Hardie methods.
The experimental results are as follows.
1. Panchromatic and multispectral images of Landsat 7 ETM+
In terms of visual quality, the fused images of the IHS and PCA methods in Fig. 2 distort the spectral characteristics of the true multispectral image significantly, and their fusion results are clearly inferior to those of the other image fusion methods; the IHS and PCA methods are therefore excluded from the quantitative comparison with the statistical-estimation-based methods below.
The wavelet transform method produces some speckle in the water body on the right. The fused images of the Nishii and Hardie methods are very blurred; note that the correlation coefficients between the low-resolution multispectral image and the panchromatic image are 0.57, 0.18 and -0.20 for the three bands. The fused image of the Bayesian method is close to the true multispectral image: it enhances spatial detail while preserving the spectral characteristics of the original multispectral image well.
In the following, the wavelet transform, Nishii and Hardie methods are compared quantitatively with the Bayesian method in terms of statistical measures of spatial-detail enhancement and spectral preservation.
To measure the ability of the methods to enhance spatial detail, the standard deviation of the difference image (SDD) obtained by subtracting the 120 m resolution multispectral image from the fused image is computed on each band, as shown in Table 1; the first row gives the SDD of the true multispectral image (30 m resolution). In Table 1, the SDD of the Nishii and Hardie methods on every channel is far smaller than that of the true image, showing that in this experiment they cannot effectively enhance spatial detail. The wavelet transform method has the same SDD on every channel, indicating that it depends mainly on the decomposition level and does not treat the individual channels specifically. The SDD of the Bayesian method is close to that of the true image on every channel, showing that determining the amplitude of the fused image on each channel from the estimated covariance of the high-resolution multispectral image is reasonable.
Table 1. Statistical measures of spatial-detail enhancement for the various methods (Landsat ETM+)
To measure the ability of the methods to preserve spectral characteristics, the following statistical measures are used:
(1) The peak signal-to-noise ratio (PSNR) [9] measures the ratio between the peak gray value of an image and the error between two images:

PSNR = 20 × log10(b / rms)   (20)

where b denotes the peak gray value of the image and rms is the root mean square error between the two images; PSNR is expressed in dB. In general, the larger the PSNR, the smaller the difference between the two images.
(2) The correlation coefficient (CC) is defined as

C(f, g) = Σ[(f(i,j) − f̄)(g(i,j) − ḡ)] / sqrt( Σ[(f(i,j) − f̄)^2] × Σ[(g(i,j) − ḡ)^2] )   (21)

where f(i,j) and g(i,j) denote the gray values of the two images and f̄ and ḡ their means. In general, the higher the correlation coefficient, the more similar the two images. PSNR and CC are computed on each band between the fused image and the true multispectral image.
(3) The relative dimensionless global error ERGAS [5] is defined as

ERGAS = 100 (h/l) sqrt( (1/K) Σ_{k=1}^{K} [rms(k)/m(k)]^2 )   (22)

where h and l are the resolutions of the multispectral image after and before fusion (here 30 m and 120 m, respectively), rms(k) is the root mean square error between the fused image and the true multispectral image on band k, and m(k) is the mean of the true multispectral image on band k. In general, the smaller the ERGAS, the better the spectral characteristics of the fused image are preserved.
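The three measures of eqs. (20)-(22) can be implemented directly; the sketch below follows those definitions (the 4x4 constant test arrays are hypothetical, chosen only so the expected values are obvious).

```python
import numpy as np

def psnr(f, g, b=255.0):
    """Eq. (20): PSNR = 20 log10(b / rms), in dB."""
    rms = np.sqrt(np.mean((f - g) ** 2))
    return 20.0 * np.log10(b / rms)

def cc(f, g):
    """Eq. (21): correlation coefficient between two bands."""
    fd, gd = f - f.mean(), g - g.mean()
    return (fd * gd).sum() / np.sqrt((fd ** 2).sum() * (gd ** 2).sum())

def ergas(fused, ref, h, l):
    """Eq. (22): ERGAS = 100 (h/l) sqrt((1/K) sum_k (rms_k / m_k)^2).
    fused, ref: (rows, cols, K) arrays; h, l: resolutions after/before fusion."""
    K = ref.shape[2]
    terms = [np.mean((fused[..., k] - ref[..., k]) ** 2) / ref[..., k].mean() ** 2
             for k in range(K)]
    return 100.0 * (h / l) * np.sqrt(np.sum(terms) / K)

ref = np.full((4, 4, 3), 100.0)
d = np.arange(16.0).reshape(4, 4)
assert cc(ref[..., 0] + d, ref[..., 0] + 2 * d) > 0.999  # linearly related bands
assert ergas(ref, ref, 30, 120) == 0.0                   # identical images
```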
These statistical measures are listed in Table 2. The Bayesian method has the highest PSNR, the largest correlation coefficient, and the smallest relative global error on every channel, indicating that the difference between its fused image and the true multispectral image is the smallest; the Bayesian method therefore preserves spectral characteristics better than the wavelet transform, Nishii and Hardie methods.
Table 2. Statistical measures of spectral preservation for the various methods (Landsat ETM+)
2. Panchromatic image of SPOT and multispectral image of TM
In terms of visual quality, the IHS and PCA methods in Fig. 3 distort the spectral characteristics of the multispectral image significantly; in particular, the PCA method changes the color of the river. The fused images of the Nishii and Hardie methods are rather blurred; here the correlation coefficients between the low-resolution multispectral image and the panchromatic image are 0.52, 0.35 and 0.29. The fusion results of the wavelet transform method and the Bayesian method are comparatively better.
In Table 3, the SDD of the Nishii and Hardie methods is very small on every channel, showing that their improvement of spatial resolution is indeed very limited. The wavelet transform method again has the same SDD on every channel, without treating the different channels specifically. The SDD of the Bayesian method on every channel is almost identical to that of the true multispectral image.
In addition, in Table 4, the Bayesian method has the highest PSNR and the largest correlation coefficient on every channel, and the smallest relative global error, indicating that the difference between its fused image and the true multispectral image is small; the performance of the Bayesian method is therefore better than that of the wavelet transform, Nishii and Hardie methods.
Table 3. Statistical measures of spatial-detail enhancement for the various methods (SPOT and TM)
Table 4. Statistical measures of spectral preservation for the various methods (SPOT and TM)
In summary, the Bayesian method overcomes the dependence of the Nishii and Hardie methods on the correlation coefficient between the multispectral and panchromatic images, and the experimental results show that its fusion performance is clearly better than that of the IHS, PCA and wavelet transform methods.
Finally, it is worth noting that for some methods, the wavelet transform method in particular, differences in parameter settings and operators can lead to different fusion results, whereas the parameters of the proposed method are set automatically, and good fusion results are obtained without human intervention.
List of references
[1] J. W. Carper, T. M. Lillesand, R. W. Kiefer. The use of intensity-hue-saturation transformation for merging SPOT panchromatic and multispectral image data [J]. Photogrammetric Engineering and Remote Sensing, 1990, 56: 459-467.
[2] V. K. Shettigara. A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set [J]. Photogrammetric Engineering and Remote Sensing, 1992, 58: 561-567.
[3] P. S. Chavez, S. C. Sides, J. A. Anderson. Comparison of three different methods to merge multi-resolution and multispectral data: Landsat TM and SPOT panchromatic [J]. Photogrammetric Engineering and Remote Sensing, 1991, 57(3): 295-303.
[4] J. Núñez, X. Otazu, O. Fors, et al. Multiresolution-based image fusion with additive wavelet decomposition [J]. IEEE Transactions on Geoscience and Remote Sensing, 1999, 37: 1204-1211.
[5] M. A. González, J. L. Saleta, R. G. Catalán, et al. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition [J]. IEEE Transactions on Geoscience and Remote Sensing, 2004, 23(18): 1291-1299.
[6] J. C. Price. Combining panchromatic and multispectral imagery from dual resolution satellite instruments [J]. Remote Sensing of Environment, 1987, 21: 119-128.
[7] C. K. Munechika, J. S. Warnick, C. Salvaggio, et al. Resolution enhancement of multispectral image data to improve classification accuracy [J]. Photogrammetric Engineering and Remote Sensing, 1993, 59: 67-72.
[8] R. Nishii, S. Kusanobu, S. Tanaka. Enhancement of low resolution image based on high resolution bands [J]. IEEE Transactions on Geoscience and Remote Sensing, 1996, 34: 1151-1158.
[9] R. C. Hardie, M. T. Eismann, G. L. Wilson. MAP estimation for hyperspectral image resolution enhancement using an auxiliary sensor [J]. IEEE Transactions on Image Processing, 2004, 13(9): 1174-1184.
[10] M. T. Eismann, R. C. Hardie. Application of stochastic mixing model to hyperspectral resolution enhancement [J]. IEEE Transactions on Geoscience and Remote Sensing, 2004, 42(9): 1924-1933.
[11] S. M. Kay. Fundamentals of Statistical Signal Processing: Estimation Theory [M]. Englewood Cliffs, NJ: Prentice-Hall, 1993: 391-392.
[12] H. Eves. Elementary Matrix Theory [M]. New York: Dover, 1966: 107.
[13] J. Zhou, D. L. Civco, J. A. Silander. A wavelet transform method to merge Landsat TM and SPOT panchromatic data [J]. International Journal of Remote Sensing, 1998, 19(4): 743-757.

Claims (3)

1. A remote sensing image fusion method based on Bayesian linear estimation, characterized by: introducing an observation model between the high-resolution multispectral image and the low-resolution multispectral image, and an observation model between the high-resolution multispectral image and the panchromatic image; combining the two observation models into a Bayesian linear model; and applying the Bayesian Gauss-Markov theorem to compute the estimate of the high-resolution multispectral image in the linear minimum mean square error (LMMSE) sense.
2. The remote sensing image fusion method based on Bayesian linear estimation according to claim 1, characterized in that the observation model between the high-resolution and low-resolution multispectral images is

y = Hz + u   (4)

where u is random noise with zero mean and covariance matrix C_u, uncorrelated with z; the matrix H represents low-pass filtering and downsampling; y is the low-resolution multispectral image arranged into a one-dimensional column vector

y = [y_1^T, y_2^T, …, y_j^T, …, y_M^T]^T   (2)

where y_j denotes the pixel value of the low-resolution multispectral image at spatial position j (with K bands, y_j = [y_{j,1}, y_{j,2}, …, y_{j,K}]^T), and M is the number of pixels of the low-resolution multispectral image; z is the high-resolution multispectral image arranged into a one-dimensional column vector

z = [z_1^T, z_2^T, …, z_i^T, …, z_N^T]^T   (3)

where z_i denotes the pixel value of the high-resolution multispectral image at spatial position i (with K bands, z_i = [z_{i,1}, z_{i,2}, …, z_{i,K}]^T), and N is the number of pixels of the high-resolution multispectral image;

the observation model between the high-resolution multispectral image and the panchromatic image is

x = G^T z + v   (5)

where v is random noise with zero mean and covariance matrix C_v, uncorrelated with z; the matrix G represents a weighted average of the K bands of the high-resolution multispectral image, with weight factors

g_l = cc_l / Σ_{l=1}^{K} |cc_l|   (6)

where cc_l denotes the correlation coefficient between the panchromatic image and band l of the low-resolution multispectral image; x is the panchromatic image arranged into a one-dimensional column vector

x = [x_1, x_2, …, x_i, …, x_N]^T   (1)

where x_i denotes the pixel value of the panchromatic image at spatial position i, and N is the number of pixels of the panchromatic image; observation models (4) and (5) are combined into the Bayesian linear model

[y; x] = [H; G^T] z + [u; v]   (11)

where the semicolon denotes vertical stacking.
3. the remote sensing image fusion method based on Bayes's Linear Estimation according to claim 2 is characterized in that the estimator of high-resolution multi-spectral image is as follows:
z ^ = E ( z ) + C z H G T T ( H G T C z H G T T + C n ) - 1 y - E ( y ) x - E ( x ) - - - ( 12 )
where E(z) and C_z are respectively the mean and the covariance of the high-resolution multispectral image; they are built from the mean and covariance of each pixel, specifically:

E(z) = [E(z_1)^T, E(z_2)^T, …, E(z_i)^T, …, E(z_N)^T]^T    (13)

(equation (14), which assembles C_z block-wise from the per-pixel covariance matrices, appears only as an image in the original document)

where E(z) is the multispectral image after bilinear interpolation (B), specifically:

E(z) = B(y)    (15)

In order to estimate C_z, the vectors E(z_i) are classified by Euclidean distance using a vector quantization algorithm; the covariance matrix of each class's vector set is computed and taken as the covariance matrix of every vector in that class;
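The covariance-estimation step can be sketched as follows. A plain k-means loop stands in here for the patent's vector quantization algorithm (both classify by Euclidean distance); names and defaults are illustrative:

```python
import numpy as np

def classwise_covariances(vectors, n_classes=4, n_iter=20, seed=0):
    """Cluster the per-pixel mean vectors E(z_i) by Euclidean distance,
    then assign each class's sample covariance to every vector in it."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), n_classes, replace=False)].copy()
    for _ in range(n_iter):
        # Distance from every vector to every class center, shape (n, n_classes).
        d = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = vectors[labels == k].mean(axis=0)
    K = vectors.shape[1]
    # One covariance matrix per class; singleton/empty classes fall back to zeros.
    covs = [np.cov(vectors[labels == k], rowvar=False)
            if np.sum(labels == k) > 1 else np.zeros((K, K))
            for k in range(n_classes)]
    return labels, covs
```

Each pixel i then uses the covariance of its class as its block in C_z.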
The estimate of E(y) in formula (12) is obtained by low-pass filtering and down-sampling (H) the interpolated image:

E(y) = H(E(z)) = H(B(y))    (16)

and the estimate of E(x) is obtained by low-pass filtering and down-sampling the panchromatic image, followed by bilinear interpolation:

E(x) = B(H(x))    (17).
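The two resampling operators used in Eqs. (15)-(17) can be sketched under simplifying assumptions: H is modelled here as r-by-r average pooling (a low-pass filter followed by down-sampling) and B as separable linear interpolation (a stand-in for the patent's bilinear interpolation); names and the pooling choice are illustrative:

```python
import numpy as np

def H(img, r=4):
    """Low-pass filter + down-sample by factor r, modelled as average pooling."""
    rows, cols = img.shape
    return img.reshape(rows // r, r, cols // r, r).mean(axis=(1, 3))

def B(img, r=4):
    """Up-sample by factor r with separable linear interpolation."""
    rows, cols = img.shape
    x = np.arange(rows)
    xi = np.linspace(0, rows - 1, rows * r)
    # Interpolate along rows, then along columns.
    tmp = np.stack([np.interp(xi, x, img[:, c]) for c in range(cols)], axis=1)
    y = np.arange(cols)
    yi = np.linspace(0, cols - 1, cols * r)
    return np.stack([np.interp(yi, y, tmp[rr, :]) for rr in range(rows * r)], axis=0)
```

With these operators, E(y) is computed per band as H(B(y)) and E(x) as B(H(x)); for a constant image both chains reproduce the input, a quick sanity check on the mean estimates.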
CNB2006100241111A 2006-02-23 2006-02-23 Remote sensing image fusion method based on Bayes linear estimation Expired - Fee Related CN100410684C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100241111A CN100410684C (en) 2006-02-23 2006-02-23 Remote sensing image fusion method based on Bayes linear estimation


Publications (2)

Publication Number Publication Date
CN1808181A CN1808181A (en) 2006-07-26
CN100410684C true CN100410684C (en) 2008-08-13

Family

ID=36840178

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100241111A Expired - Fee Related CN100410684C (en) 2006-02-23 2006-02-23 Remote sensing image fusion method based on Bayes linear estimation

Country Status (1)

Country Link
CN (1) CN100410684C (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101030299B (en) * 2007-03-29 2010-05-19 复旦大学 Method for decomposing remote-sensing-mixed image element based on data space orthogonality
CN101221243B (en) * 2007-11-01 2011-12-07 复旦大学 Remote sensing image mixed pixels decomposition method based on nonnegative matrix factorization
CN101916436B (en) * 2010-08-30 2011-11-16 武汉大学 Multi-scale spatial projecting and remote sensing image fusing method
CN101916435B (en) * 2010-08-30 2011-12-28 武汉大学 Method for fusing multi-scale spectrum projection remote sensing images
CN102915529A (en) * 2012-10-15 2013-02-06 黄波 Integrated fusion technique and system based on remote sensing of time, space, spectrum and angle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001084828A1 (en) * 2000-04-27 2001-11-08 Litton Systems, Inc. Method and system for fusing images
WO2003021967A2 (en) * 2001-09-04 2003-03-13 Icerobotics Limited Image fusion systems
US20030053668A1 (en) * 2001-08-22 2003-03-20 Hendrik Ditt Device for processing images, in particular medical images
CN1484039A (en) * 2003-07-24 2004-03-24 上海交通大学 Image merging method based on inseparable wavelet frame
CN1581230A (en) * 2004-05-20 2005-02-16 上海交通大学 Remote-senstive image interfusion method based on image local spectrum characteristic
CN1588447A (en) * 2004-08-19 2005-03-02 复旦大学 Remote sensitive image fusing method based on residual error


Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
Multi-scale target recognition based on Bayesian data fusion. Wang Guoyou, Zou Yulan. Journal of Huazhong University of Science and Technology (Natural Science Edition), Vol. 31, No. 11. 2003 *
Basic principles and applications of multi-sensor information fusion. Ma Ping, Lv Feng, Du Hailian, Wang Rui, Niu Chenglin. Control Engineering of China, Vol. 13, No. 1. 2006 *
Current status of multi-source remote sensing data fusion. Luo Zhong. Journal of Test and Measurement Technology, Vol. 13, No. 1. 1999 *
Multi-scale Bayesian image segmentation fusing contextual information. Wang Xili, Liu Fang, Jiao Licheng. Chinese Journal of Computers, Vol. 28, No. 3. 2005 *
Application of Bayesian fusion in SAR image classification. Su Fang, Hong Wen, Mao Shiyi. Acta Electronica Sinica, Vol. 31, No. 7. 2003 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101630413B (en) * 2009-08-14 2012-01-25 浙江大学 Multi-robot tracked mobile target algorithm
CN102193090A (en) * 2010-03-19 2011-09-21 复旦大学 Mixed pixel decomposition method for remote sensing images
CN102193090B (en) * 2010-03-19 2013-08-21 复旦大学 Mixed pixel decomposition method for remote sensing images

Also Published As

Publication number Publication date
CN1808181A (en) 2006-07-26

Similar Documents

Publication Publication Date Title
Dong et al. Laplacian pyramid dense network for hyperspectral pansharpening
CN100410684C (en) Remote sensing image fusion method based on Bayes linear estimation
Yang et al. Remote sensing image fusion based on adaptive IHS and multiscale guided filter
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
Liu et al. A variational pan-sharpening method based on spatial fractional-order geometry and spectral–spatial low-rank priors
González-Audícana et al. A low computational-cost method to fuse IKONOS images using the spectral response function of its sensors
Dong et al. Generative dual-adversarial network with spectral fidelity and spatial enhancement for hyperspectral pansharpening
Wang et al. Enhanced deep blind hyperspectral image fusion
CN110969577A (en) Video super-resolution reconstruction method based on deep double attention network
Joshi et al. A model-based approach to multiresolution fusion in remotely sensed images
CN109727207B (en) Hyperspectral image sharpening method based on spectrum prediction residual convolution neural network
CN102982517B (en) Remote-sensing image fusion method based on local correlation of light spectrum and space
Gou et al. Remote sensing image super-resolution reconstruction based on nonlocal pairwise dictionaries and double regularization
Su Highly effective iterative demosaicing using weighted-edge and color-difference interpolations
CN101216557B (en) Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method
CN116309136A (en) Remote sensing image cloud zone reconstruction method based on SAR priori knowledge guidance
CN103576164A (en) High-resolution remote sensing image fusion method based on linear Bayesian estimation
Bi et al. Haze removal for a single remote sensing image using low-rank and sparse prior
Dong et al. Fusion of hyperspectral and panchromatic images using generative adversarial network and image segmentation
Lin et al. Polarimetric SAR image super-resolution VIA deep convolutional neural network
Xiao et al. Physics-based GAN with iterative refinement unit for hyperspectral and multispectral image fusion
CN112819697A (en) Remote sensing image space-time fusion method and system
CN100465661C (en) Multispectral and panchromatic image fusion method of supercomplex principal element weighting
Long et al. Dual self-attention Swin transformer for hyperspectral image super-resolution
Bao et al. A blind full resolution assessment method for pansharpened images based on multistream collaborative learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080813

Termination date: 20110223