CN101482929B - Remote-sensing image processing method and system - Google Patents


Info

Publication number
CN101482929B
CN101482929B CN2009100793750A CN200910079375A
Authority
CN
China
Prior art keywords
image
pixel
reflectivity
similar
spatial resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009100793750A
Other languages
Chinese (zh)
Other versions
CN101482929A (en)
Inventor
陈晋 (Chen Jin)
陈仲新 (Chen Zhongxin)
朱孝林 (Zhu Xiaolin)
陈学弘 (Chen Xuehong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Normal University
Institute of Agricultural Resources and Regional Planning of CAAS
Original Assignee
Beijing Normal University
Institute of Agricultural Resources and Regional Planning of CAAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Normal University and Institute of Agricultural Resources and Regional Planning of CAAS
Priority to CN2009100793750A
Publication of CN101482929A
Application granted
Publication of CN101482929B

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a remote-sensing image processing method that obtains a desired image A0 at a given moment from two groups of images with different spatial resolutions. The method first calculates, from the two image groups, the conversion coefficients between the reflectance-change amounts of a target area in the two groups; it then converts reflectance changes observed in the low-spatial-resolution images into the corresponding changes in the high-spatial-resolution images using those coefficients; and finally it derives the reflectance of each target pixel of image A0 from the known high-spatial-resolution images. Correspondingly, the invention provides a remote-sensing image processing system. The invention can generate images that better preserve spatial detail.

Description

Remote-sensing image processing method and system
Technical field
The present invention relates to the technical field of remote-sensing image processing, and in particular to a processing method and system that generate a new remote-sensing image from more than one existing remote-sensing image.
Background technology
Techniques that integrate the respective advantages of multi-sensor remote-sensing data in spatial, temporal, and spectral resolution — the remote-sensing data fusion techniques developed since the 1980s — aim to improve the spatial and spectral resolution of remote-sensing data by processing data from multiple sensors. The more mature fusion methods at present include the IHS transform, principal component transform, wavelet transform, Brovey transform, and high-pass filtering. These methods, however, are mainly used to merge a high-spatial-resolution panchromatic image with a lower-spatial-resolution multispectral image acquired at the same or a close moment, producing a high-spatial-resolution multispectral image for that single moment. Because a high-spatial-resolution image cannot be obtained simultaneously with every low-spatial-resolution image, these traditional multi-source fusion methods cannot improve the spatial and temporal resolution of remote-sensing data at the same time; that is, they cannot generate reflectance data of high spatiotemporal resolution. To address this, Gao et al. proposed the STARFM method in 2006. STARFM can produce fairly accurate high-spatiotemporal-resolution reflectance images, but the images it generates preserve spatial detail poorly.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a remote-sensing image processing method that can generate images which better preserve spatial detail.
To solve the above technical problem, the invention provides a remote-sensing image processing method. A pixel of the desired image A0 at a given moment is designated the target pixel, and the ground surface area corresponding to the target pixel is designated the target area. A first group of images with high spatial resolution and a second group of images with low spatial resolution are obtained for the same ground surface area, and at least two pairs of images corresponding to different moments are selected from the two groups. The reflectance of each target pixel is then calculated by the following steps to generate image A0: (a) calculate, from the selected images, the conversion coefficient between the reflectance-change amounts of the target area in the two image groups; (b) select from the second group an image B0 corresponding to the moment of image A0 and at least one image at another moment, and calculate from the selected images the change of the target area's reflectance in each image at the other moments relative to its reflectance in image B0; (c) convert each reflectance-change amount into the corresponding change amount in the first image group using the conversion coefficient; and (d) select from the first group the images corresponding to the other moments, and calculate the reflectance of the target area from each converted reflectance-change amount and the target area's reflectance in the selected images.
From at least two pairs of images at corresponding moments, the relation between the change amounts of the ground-area reflectance in the two image groups — i.e., the conversion coefficient — can be obtained; step (a) derives this coefficient for the target area from the chosen images. Because this relation can be considered stable over a short period, as long as the moments of the selected images do not differ much from the moment of image A0, step (c) can use the conversion coefficient to convert the target area's reflectance change in the second image group obtained in step (b) (the "second change amount") into its reflectance change in the first image group (the "first change amount"). The reflectance obtained in step (d) is therefore generated from the reflectance in the finer first image group plus the converted change amount, so the generated image A0 retains the detail of the first image group well and avoids the visually smoothed appearance of images generated by other methods.
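Steps (a)-(d) for a single target pixel can be sketched numerically. Every number below is a made-up toy value, and the conversion coefficient v is assumed to have already been obtained in step (a); this is an illustration of the arithmetic, not the patent's implementation.

```python
import numpy as np

v = 1.25                                  # step (a): assumed conversion coefficient

coarse_b0 = 0.30                          # coarse reflectance at the A0 moment (image B0)
coarse_other = np.array([0.24, 0.36])     # coarse reflectance at two other known moments

# step (b): change of the target area in the coarse group, taken here
# as the change from each other moment toward the B0 moment
second_change = coarse_b0 - coarse_other  # [ 0.06, -0.06]

# step (c): convert the coarse-group change to the fine-group change
first_change = v * second_change          # [ 0.075, -0.075]

# step (d): add the converted change to the known fine reflectances,
# giving one estimate of the A0 pixel per known moment
fine_other = np.array([0.20, 0.35])
predictions = fine_other + first_change
target_reflectance = float(predictions.mean())
print(target_reflectance)                 # close to 0.275
```

Both known moments predict the same value here because the toy data change linearly; real data would give differing estimates that the method combines.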
Preferably, step (a) comprises: (a1) setting the pixel corresponding to the target area in the selected first-group images as the center pixel; (a2) screening out, from each image containing the center pixel, at least one similar pixel that belongs to the same kind of land cover as the center pixel; and (a3) designating the ground surface area corresponding to each similar pixel as a similar area and calculating the conversion coefficient from the reflectance of the similar areas in the selected images.
The reflectance change of a mixed pixel is the aggregate of the changes of its endmember reflectances, so endmembers provide more accurate change information than mixed pixels. Since the similar pixels belong to the same land-cover class as the center pixel, they serve as endmembers that provide the most accurate change information, and thus yield a more accurate conversion coefficient. Note that the center pixel itself always counts as one of the similar pixels; when no other similar pixel can be screened out, the center pixel is the only similar pixel. This matters especially for ground areas containing fragmented patches and small objects, whose pixels in the second image group are mixed pixels. The change information such a mixed pixel provides (the second change amount) is converted, by the conversion coefficient obtained from the similar pixels, into the change information (the first change amount) of the endmember that belongs to the same land cover as the target pixel. The reflectance of the target pixel is therefore more accurate, giving a better fusion result particularly for fragmented patches and small objects.
Preferably, before step (a3) the method further comprises: (a21) calculating the weight of each similar pixel from the selected images. Step (b) then comprises: (b1) calculating, from image B0 and the other second-group images, the reflectance-change amount of each similar area in the second image group; and (b2) computing, as a weighted average of the change amounts from step (b1) using the obtained weights, the change of the target area's reflectance in each image at the other moments relative to its reflectance in image B0.
The second change amount of the target area is thus not obtained by directly differencing the target area's reflectance in image B0 and in the other second-group images; instead it is contributed jointly by the similar pixels according to their weights. This corrects the change information provided by the mixed pixels corresponding to the target area in image B0 and the other second-group images, yielding a more accurate second change amount.
Preferably, the conversion coefficient in step (a3) is calculated by regression analysis, with the change amount of each similar area's reflectance in the second image group as the independent variable and its change amount in the first image group as the dependent variable.
Regression analysis eliminates errors caused by noise and similar factors, yielding a more robust conversion coefficient and reducing possible error.
Preferably, before step (a3) the method further comprises: (a22) calculating the weight of each similar pixel from the selected images; the regression analysis is then a weighted regression using the weight of each similar pixel.
Each similar pixel's spatial distance from the center pixel, and the consistency of its spectral signature with that of the center pixel, cause the similar pixels to differ in how much they contribute when providing change information. Weighting the regression by each similar pixel's weight reflects these differences, yielding a more accurate conversion coefficient and further improving the accuracy of the target pixel's reflectance.
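A weighted least-squares fit of first-group change amounts on second-group change amounts can be sketched with NumPy; the change amounts and weights below are hypothetical values, and the fit is done by scaling each row by the square root of its weight.

```python
import numpy as np

def weighted_slope(x, y, w):
    """Weighted least-squares fit of y on x; returns [slope, intercept]."""
    sw = np.sqrt(w)
    A = np.column_stack([x, np.ones_like(x)]) * sw[:, None]
    coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
    return coef

# toy change amounts for four similar areas
x = np.array([0.00, 0.02, 0.04, 0.06])   # second-group (coarse) changes
y = 1.5 * x + 0.01                       # first-group (fine) changes
w = np.array([1.0, 2.0, 2.0, 1.0])       # similar-pixel weights

slope, intercept = weighted_slope(x, y, w)
print(round(slope, 6), round(intercept, 6))
```

With the exactly linear toy data the fit recovers slope 1.5 and intercept 0.01 for any positive weights; with noisy real data the weights shift the fit toward the more trusted similar pixels.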
Preferably, the weight of each similar pixel is calculated by the following steps: (a221) calculating, from the selected images, the degree of consistency between the all-band spectral vector of each similar pixel and the all-band spectral vector of the corresponding similar-area pixel in the second image group; and (a222) calculating the weight of each similar pixel from that degree of consistency.
The pixels of the first image group are high-spatial-resolution pixels and those of the second group are low-spatial-resolution pixels. The proportion that the land cover under a high-spatial-resolution pixel occupies among the various land covers under the corresponding low-spatial-resolution pixel is the purity of the high-spatial-resolution pixel. The greater a similar pixel's purity, the more accurate the change information it provides; steps (a221) and (a222) therefore raise the weight of high-purity similar pixels that can provide more accurate change information. The purity itself is measured through the degree of consistency of the all-band spectral vectors.
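One plausible measure of the consistency between two all-band spectral vectors is their correlation; the sketch below uses Pearson correlation and normalizes the scores into weights. The patent does not fix a specific consistency measure here, and the band values are invented.

```python
import numpy as np

def spectral_consistency(fine_spec, coarse_spec):
    """Pearson correlation between two all-band spectral vectors."""
    return float(np.corrcoef(fine_spec, coarse_spec)[0, 1])

# toy spectra over three bands (e.g. red, NIR, SWIR) for two similar pixels
fine = [np.array([0.05, 0.40, 0.20]), np.array([0.10, 0.30, 0.25])]
coarse = np.array([0.06, 0.38, 0.21])    # spectrum of the coarse pixel

r = np.array([spectral_consistency(f, coarse) for f in fine])
weights = r / r.sum()                    # normalize so weights sum to 1
print(weights)
```

The first similar pixel's spectrum tracks the coarse pixel's spectrum more closely, so it ends up with the larger weight, matching the intent that higher-purity pixels contribute more.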
Preferably, step (a2) further comprises: taking the intersection of the ground surface areas corresponding to the similar pixels screened out at each moment, and treating the pixels corresponding to the intersected area as the similar pixels.
Some land covers change over time, and their spectral signatures change accordingly, so similar pixels screened from images of only one moment may include pixels that do not belong to the same land cover as the center pixel. Taking the intersection retains only the pixels that belong to the same land cover as the center pixel at every moment, giving a more reliable target pixel.
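The intersection step is plain set arithmetic; the (row, col) window offsets below are hypothetical screening results for two known moments.

```python
# similar-pixel candidates screened independently at two known moments
similar_tm = {(0, 0), (0, 1), (1, 2), (2, 2)}
similar_tn = {(0, 0), (1, 2), (2, 0), (2, 2)}

# keep only pixels that are the same land cover as the center pixel
# at every known moment
similar = similar_tm & similar_tn
print(sorted(similar))
```

Pixel (0, 1), similar only at the first moment, is discarded — exactly the case of a land cover whose spectrum drifted between the two dates.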
The present invention correspondingly provides a remote-sensing image processing system that derives the high-spatial-resolution image A0 from the two image groups, where image A0 does not belong to the first image group and consists of at least one pixel to be calculated. The processing system comprises: an image selection module 101, used to obtain a first group of images with high spatial resolution and a second group with low spatial resolution of the same ground surface area and to select at least two pairs of images corresponding to different moments from the two groups; a target-pixel setting module 102, used to set a pixel of the desired image A0 at a given moment as the target pixel and to determine the ground surface area corresponding to the target pixel as the target area; a target-pixel reflectance calculation module 104, used to calculate the reflectance of the target pixel; and an image generation module 108, used to generate image A0 from the reflectance of each target pixel calculated by module 104. The target-pixel reflectance calculation module 104 comprises the following submodules: a conversion coefficient module 103, which calculates from the selected images the conversion coefficient between the reflectance-change amounts of the target area in the two image groups; a second-change-amount calculation module 105, used to select from the second image group an image B0 corresponding to the moment of image A0 and at least one image at another moment, and to calculate from the selected images the change of the target area's reflectance in each image at the other moments relative to its reflectance in image B0; a change-amount conversion module 106, used to convert each change amount obtained by module 105 into the corresponding reflectance-change amount in the first image group using the conversion coefficient; and a reflectance calculation module 107, used to select from the first image group the images corresponding to the other moments and to calculate the reflectance of the target area from each converted reflectance-change amount and the target area's reflectance in the selected images.
Preferably, the conversion coefficient module 103 comprises: a center-pixel setting module 1031, used to set the pixel corresponding to the target area in the selected first-group images as the center pixel; a similar-pixel screening module 1032, used to screen out from each image containing the center pixel at least one similar pixel belonging to the same land cover as the center pixel; and a conversion coefficient calculation module 1033, used to designate the ground surface area corresponding to each similar pixel as a similar area and to calculate the conversion coefficient from the reflectance of the similar areas in the selected images.
Preferably, the target-pixel reflectance calculation module 104 further comprises a weight module, used to calculate the weight of each similar pixel from the selected images. The second-change-amount calculation module 105 then comprises: a similar-pixel second-change-amount calculation module 1051, used to select from the second image group the image B0 corresponding to the given moment and at least one image at another moment, and to calculate from the selected images the reflectance-change amount of each similar area in the second image group; and a second-change-amount weighted-average module 1052, which computes, as a weighted average of the change amounts obtained by module 1051 using the obtained weights, the change of the target area's reflectance in each image at the other moments relative to its reflectance in image B0.
Description of drawings
Fig. 1 is a flowchart of an embodiment of the remote-sensing image processing method;
Fig. 2 is a flowchart of screening similar pixels;
Fig. 3 is a schematic diagram of screening similar pixels by taking the intersection;
Fig. 4 is a flowchart of calculating the weights of similar pixels in the present embodiment;
Fig. 5 is a flowchart of obtaining the conversion coefficient;
Fig. 6 is a schematic diagram of the conversion coefficient calculation;
Fig. 7 is a flowchart of calculating the high-spatial-resolution reflectance of the target pixel at the prediction moment;
Fig. 8 is a structural diagram of the processing system;
Fig. 9 is a structural diagram of the conversion coefficient module 103;
Fig. 10 is a structural diagram of the second-change-amount calculation module 105;
Fig. 11 is a structural diagram of the weight module 109 in one embodiment;
Fig. 12 is a structural diagram of the weight module 109 in another embodiment;
Fig. 13 is a structural diagram of the similar-pixel screening module 1032;
Fig. 14 is a structural diagram of the adjustment module 1034;
Fig. 15 shows a simulated phenological change;
Fig. 16 shows a simulated linear ground object;
Fig. 17 shows the MDCM and STARFM predictions of the reflectance of the linear ground object in Fig. 16(b);
Fig. 18 shows simulated small-sized ground objects;
Fig. 19 shows the MDCM and STARFM prediction errors for each small circular object in Fig. 18(b);
Fig. 20 shows false-color composite images of MODIS (top row) and Landsat (bottom row);
Fig. 21 shows Landsat-resolution false-color composite images for July 11 generated by the STARFM and MDCM methods, respectively;
Fig. 22 shows per-band scatter plots of the real image versus the reconstructed image.
Embodiment
Before the embodiments are described, the theoretical basis of the invention is explained.
After remote-sensing data of the same area undergo radiometric calibration, geometric registration, and atmospheric correction, data from different sensors have a certain comparability and correlation. However, because different sensors differ in band width, spectral response function, and the atmospheric conditions at acquisition time, a certain systematic deviation exists between these multi-source data. Once the systematic error is corrected, how to exploit the correlation between them to fuse the multi-source data constitutes the core of the present invention. The theoretical basis of the invention is introduced below for two cases.
For a homogeneous land surface: assume that the low-spatial-resolution, high-temporal-resolution sensor (abbreviated "low spatial resolution") and the high-spatial-resolution, low-temporal-resolution sensor (abbreviated "high spatial resolution") have similar spectral band settings. When the low-spatial-resolution image is resampled to the same spatial resolution (i.e., the same pixel size) and coordinate system as the high-spatial-resolution image, then for the same homogeneous land cover — a low-spatial-resolution pixel that is pure, containing only one land-cover type — the relation on band B between the high-spatial-resolution reflectance and the low-spatial-resolution reflectance is a relative radiometric calibration relation. This relation is determined only by the differences in the two sensors' characteristics (band width and spectral response function) and the difference in atmospheric conditions at imaging time, and can generally be regarded as the following linear relation:
F(x_i, y_j, t_k, B) = a × C(x_i, y_j, t_k, B) + b    (1)
Here F and C denote the pixel reflectances at high and low spatial resolution respectively, (x_i, y_j) is the pixel position, B is a spectral band, t_k is the image acquisition moment (in days), and a and b are the gain and offset of the relative radiometric calibration between the high- and low-spatial-resolution images. For the same homogeneous land cover, the relation in formula (1) holds at any image acquisition moment.
If the high- and low-spatial-resolution images of band B at moment t_0 and the low-spatial-resolution image of the same band at moment t_p have been obtained, the high-spatial-resolution pixel reflectance of this band at moment t_p can be expressed by formula (2):

F(x_i, y_j, t_p, B) = a × C(x_i, y_j, t_p, B) + b    (2)
And at moment t_0 the pixel reflectances of the high- and low-spatial-resolution images are related by:

F(x_i, y_j, t_0, B) = a × C(x_i, y_j, t_0, B) + b    (3)
Subtracting formula (3) from formula (2) and rearranging gives formula (4):

F(x_i, y_j, t_p, B) = F(x_i, y_j, t_0, B) + a × (C(x_i, y_j, t_p, B) − C(x_i, y_j, t_0, B))    (4)
The above formula can be understood as follows: the high-spatial-resolution pixel reflectance at t_p equals the high-spatial-resolution reflectance at t_0 plus the reflectance change from t_0 to t_p. In the formula, a is the gain of the relative radiometric calibration between the high- and low-spatial-resolution images; it can also be regarded as the conversion coefficient between the high- and low-spatial-resolution pixel reflectances, determined by the two sensors' characteristic differences and the difference in atmospheric conditions at imaging time. If the two sensors' imaging moments are close, the atmospheric conditions can be considered identical, and the coefficient is then determined only by the sensor characteristic differences and does not change over time.
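Formula (4) for a pure pixel is a one-line computation; the reflectances and the gain a below are invented values for illustration only.

```python
def predict_fine(f_t0, c_tp, c_t0, a):
    """Formula (4): fine-resolution reflectance at t_p for a pure pixel."""
    return f_t0 + a * (c_tp - c_t0)

# hypothetical band-B reflectances
f_t0 = 0.22               # fine pixel at the known moment t_0
c_t0, c_tp = 0.25, 0.31   # coarse pixel at t_0 and at the prediction moment t_p
a = 0.9                   # relative-calibration gain from regression

print(round(predict_fine(f_t0, c_tp, c_t0, a), 4))
```

The known fine value 0.22 is shifted by the gain-scaled coarse change 0.9 × 0.06, giving 0.274.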
If two pairs of high- and low-spatial-resolution images at any two moments t_m and t_n are obtained, the conversion coefficient a of each band can be found by linear regression of that band's pure-pixel reflectances on the high- and low-spatial-resolution images at the two moments. With this coefficient and formula (4), a temporally continuous series of low-spatial-resolution images can then generate a temporally continuous series of high-spatial-resolution reflectance images.
For a non-homogeneous land surface: because of the limited sensor spatial resolution and the complex diversity of land covers, mixed pixels are prevalent in low-spatial-resolution images, especially in regions with varied, intricately distributed land-cover types. In this case the reflectance of a low-spatial-resolution pixel and that of its corresponding high-spatial-resolution pixels do not reflect the same land cover; the difference between the two is no longer only a radiometric-calibration difference, and formula (4), which is based on pure pixels of a homogeneous surface, no longer applies to mixed pixels of a non-homogeneous surface.
The spectral signature of a mixed pixel is the aggregate of the endmember spectra of all land-cover classes inside it, so its reflectance change between two moments t_m and t_n is the aggregate of the changes of all endmember spectra. The most widely used mixed-pixel spectral mixture model at present is the linear spectral mixture model, which expresses the relation between the mixed pixel and each endmember spectrum as a linear relation. Suppose the reflectances of a mixed pixel in some band on the low-spatial-resolution image at moments t_m and t_n are Y_m and Y_n respectively. According to the linear spectral mixture model, Y_m and Y_n can be expressed as:
Y_m = Σ_{i=1}^{M} f_i X_im + ε

Y_n = Σ_{i=1}^{M} f_i X_in + ε    (5)
Here M is the number of endmembers contained in the mixed pixel, X_im and X_in are the spectra of the i-th endmember at moments t_m and t_n, f_i is the fraction of the i-th endmember, and ε is an error term, considered constant between t_m and t_n. Formula (5) gives the change amount Y_n − Y_m:
Y_n − Y_m = Σ_{i=1}^{M} f_i (X_in − X_im)    (6)
Suppose the interval between t_m and t_n is within a certain range, so that the change of each endmember's reflectance can be approximated as linear. Then, with r_i denoting the linear change rate of the i-th endmember's reflectance, the endmember spectrum X_in can be expressed by formula (7):

X_in = r_i × Δt + X_im    (7)
Here Δt = t_n − t_m. Substituting formula (7) into formula (6) gives:

Y_n − Y_m = Δt Σ_{i=1}^{M} f_i r_i    (8)

If the reflectances of the k-th endmember at t_m and t_n are known, Δt can be obtained from formula (7):

Δt = (X_kn − X_km) / r_k    (9)

Substituting formula (9) into formula (8) gives:

(X_kn − X_km) / (Y_n − Y_m) = r_k / Σ_{i=1}^{M} f_i r_i    (10)
Because the interval between t_m and t_n is assumed to be within a certain range, the endmember fractions f_i and the linear change rates r_i of the endmember reflectances can be considered stable between t_m and t_n, so the right-hand side of formula (10) is a stable quantity, denoted v. Its meaning is the scale coefficient between the reflectance-change amount of the class-k endmember and the reflectance-change amount of the mixed pixel.
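The derivation of formulas (5)-(10) can be checked numerically: with fixed fractions and linear change rates, the observed ratio (X_kn − X_km)/(Y_n − Y_m) matches r_k / Σ f_i r_i. All fractions, rates, and reflectances below are toy values.

```python
import numpy as np

# two endmembers with fixed fractions and linear reflectance change rates
f = np.array([0.6, 0.4])          # fractions f_i (sum to 1)
r = np.array([0.002, 0.005])      # linear change rates r_i per day
x_m = np.array([0.20, 0.40])      # endmember reflectances at t_m
dt = 10.0                         # days between t_m and t_n

x_n = x_m + r * dt                # formula (7)
y_m, y_n = f @ x_m, f @ x_n       # mixed-pixel reflectances, formula (5), ε = 0

# formula (10): the ratio for endmember k = 0 equals r_k / sum(f_i * r_i)
k = 0
v_observed = (x_n[k] - x_m[k]) / (y_n - y_m)
v_formula = r[k] / (f @ r)
print(round(v_observed, 6), round(v_formula, 6))
```

Both expressions give 0.625 here, and the ratio is independent of Δt — which is exactly why v can be carried over to the prediction moment.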
For a non-homogeneous land surface, the low-spatial-resolution pixel is a mixed pixel, and each pixel of the corresponding high-spatial-resolution image is regarded as an endmember. According to formula (10), the high-spatial-resolution pixel reflectances F(x_i, y_i, t_m, B) and F(x_i, y_i, t_n, B) at moments t_m and t_n and the corresponding low-spatial-resolution pixel reflectances C(x_i, y_i, t_m, B) and C(x_i, y_i, t_n, B) satisfy the following relation:
(F(x_i, y_i, t_n, B) − F(x_i, y_i, t_m, B)) / (C(x_i, y_i, t_n, B) − C(x_i, y_i, t_m, B)) = v    (11)
Formula (11) shows that, at moments t_m and t_n, the high-spatial-resolution reflectance of each band is linearly related to the corresponding low-spatial-resolution reflectance, so the change scale coefficient v of each band can be obtained by linear regression.
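One way to estimate v per band is a regression of fine-resolution change amounts on coarse-resolution change amounts over several similar areas; since formula (11) has no offset, a slope through the origin is a natural fit. The change amounts below are fabricated.

```python
import numpy as np

# paired change amounts, C(t_n) - C(t_m) and F(t_n) - F(t_m),
# over four similar areas (toy data, exactly proportional)
delta_coarse = np.array([0.01, 0.02, 0.03, 0.05])
delta_fine = 1.3 * delta_coarse

# least-squares slope through the origin: v = sum(x*y) / sum(x*x)
v = float(delta_fine @ delta_coarse / (delta_coarse @ delta_coarse))
print(round(v, 6))
```

With noisy real data the same formula averages out per-area errors instead of recovering the slope exactly.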
If, within the interval from t_m to t_n, the high- and low-spatial-resolution images of band B at moment t_0 and the low-spatial-resolution image of the same band at moment t_p have been obtained, then by the above assumptions the scale coefficient between the reflectance changes of the high-spatial-resolution pixel and the low-spatial-resolution pixel of this band from t_0 to t_p is still v:

(F(x_i, y_i, t_p, B) − F(x_i, y_i, t_0, B)) / (C(x_i, y_i, t_p, B) − C(x_i, y_i, t_0, B)) = v    (12)
Formula (12) can be solved for the high-spatial-resolution reflectance at moment t_p:

F(x_i, y_j, t_p, B) = F(x_i, y_j, t_0, B) + v × (C(x_i, y_j, t_p, B) − C(x_i, y_j, t_0, B))    (13)
Although formula (13) has the same form as formula (4), its meaning and conditions of use differ. Formula (4) expresses the relative-calibration relation between a pure pixel's reflectances at different resolutions; it holds at any moment, and its result is the most accurate. Formula (13) expresses the proportional relation between a mixed pixel's reflectance changes at different resolutions; by assumption it holds only between t_m and t_n or at moments near them, and it rests on the assumption that each land cover's reflectance changes linearly over a short time range, so its result is an approximate solution.
Based on the above theoretical foundation, when the low-spatial-resolution pixel is a pure pixel, formula (4) can be used to solve for the high-spatial-resolution reflectance at t_p; when it is a mixed pixel, formula (13) is used instead. Pure pixels, however, give more accurate and reliable results than mixed pixels. To exploit pure-pixel information as far as possible, and to reduce the uncertainty that contamination by cloud, atmosphere, and the like brings to the result, the present embodiment solves within a window: taking the pixel (x_{w/2}, y_{w/2}) as the center, a window of width w is placed, similar pixels belonging to the same land cover as the center pixel are first screened out, and the high-spatial-resolution reflectance of the center pixel at t_p is then computed with formula (14):
F(x_{w/2}, y_{w/2}, t_p, B) = F(x_{w/2}, y_{w/2}, t_0, B) + Σ_{i=1}^{N} W_i × V × (C(x_i, y_i, t_p, B) − C(x_i, y_i, t_0, B))    (14)
Here N is the number of similar pixels in the window that belong to the same land cover as the center pixel, W_i is the weight of each similar pixel, and V is the conversion coefficient between the high and low spatial resolutions. Formula (14) means that the reflectance at t_p is determined by the reflectance at the known moment plus the change from the known moment t_0 to t_p, and that this change is predicted jointly by the similar pixels in the window.
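Formula (14) for one center pixel reduces to a weighted sum over the similar pixels; the weights, coarse reflectances, and coefficient below are hypothetical values for N = 3 similar pixels on one band.

```python
import numpy as np

f_t0 = 0.30                             # fine reflectance of the center pixel at t_0
w = np.array([0.5, 0.3, 0.2])           # similar-pixel weights W_i (sum to 1)
c_tp = np.array([0.34, 0.31, 0.28])     # coarse reflectance over each similar area at t_p
c_t0 = np.array([0.30, 0.29, 0.30])     # coarse reflectance over each similar area at t_0
v = 1.2                                 # conversion coefficient V for this band

# formula (14): known fine value plus the weighted, converted coarse change
f_tp = f_t0 + float(np.sum(w * v * (c_tp - c_t0)))
print(round(f_tp, 6))
```

Note that the third similar area darkened between t_0 and t_p, so its (small-weight) contribution pulls the prediction slightly down while the others push it up.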
Before the present embodiment is applied, the obtained high- and low-spatial-resolution images usually need to be preprocessed so that the multi-source data share similar band settings, the same pixel size and map format, and the same coordinate system; this can be done with common software and methods. The embodiment of the remote-sensing image processing method is described in detail below with reference to the drawings.
In the present embodiment the remote-sensing images are two groups of images acquired over the same land surface, the first group with high spatial resolution and the second with low spatial resolution; the high-spatial-resolution image A at moment t_p is derived from the two groups, and image A consists of at least one pixel to be calculated. As shown in Fig. 1, the remote-sensing image processing method comprises the following steps. Step 100: select at least two pairs of images at corresponding moments from the remote-sensing images — in this embodiment, two pairs at the known moments t_m and t_n — and select from the second group the image whose moment corresponds to t_p. Step 101: set a pixel to be calculated as the target pixel and determine the ground surface area corresponding to the target pixel as the target area. Step 102: taking the pixel corresponding to the target area as the center pixel, screen out the similar land-cover pixels within the window around the center pixel in the high-spatial-resolution images of the known moments to obtain the similar pixels; the center pixel itself always counts as a similar pixel, and when no other similar pixel can be screened out, the center pixel is the only similar pixel. Step 104: calculate the weights of the similar pixels — in this embodiment, from the two low-spatial-resolution images at t_m and t_n. Step 106: using these weights, perform a weighted linear regression of the known-moment high- and low-spatial-resolution reflectances to calculate the conversion coefficient. Step 108: calculate the high-spatial-resolution reflectance of the target pixel at the prediction moment — in this embodiment, compute the time weights of the known-moment high-spatial-resolution images from the conversion coefficient and the low-spatial-resolution image at the prediction moment t_p, and obtain the predicted high-spatial-resolution reflectance of the target pixel as a weighted combination of the known-moment high-spatial-resolution images using these time weights. Step 110: calculate all target pixels one by one and finally generate image A. Each of the above steps is described in detail below with reference to the drawings.
Fig. 2 is the flowchart of screening similar pixels in the present embodiment. As shown in the figure, it comprises the following steps:
Step 1022: with the pixel corresponding to the target area as the center pixel, screen the window around the center pixel in each known-time high-spatial-resolution image for pixels that belong to the same land-cover type as the target pixel, obtaining the similar pixels. Within the window centered on the target pixel (x_{w/2}, y_{w/2}), only the high-spatial-resolution pixels of the same land-cover type as the center pixel can provide comparatively correct and reliable reflectance-change information. Two methods can be used to screen similar pixels: (a) perform an unsupervised classification of the high-spatial-resolution image, and take the pixels of the same class as the center pixel as similar pixels; (b) use a threshold test, judging from the difference between the reflectance of each window pixel and that of the center pixel whether they belong to the same land-cover type. The two methods share the same idea: pixels with similar spectra are taken to be the same land-cover type. They differ, however, in that the threshold method searches the window for pixels spectrally similar to the center pixel, so its screening condition is locally adapted and changes with the position of the center pixel, whereas classification is performed on the whole image, so any classification error can propagate into large mistakes in the selected similar pixels. The present embodiment therefore screens similar pixels with the threshold method: a window pixel is confirmed as a similar pixel of the center pixel when every band satisfies formula 15. |F(x_i, y_i, t_k, B) − F(x_{w/2}, y_{w/2}, t_k, B)| ≤ σ(B) × 2/m    (15)
Here σ(B) is the standard deviation of band B over all pixels of the high-spatial-resolution image at time t_k, and m is the estimated number of land-cover classes; for example, if the area covered by the image mainly contains the four classes vegetation, bare land, water and rock, m can be set to 4. Of course, the screening may also use only a subset of the bands, or method (a) may be adopted instead.
Step 1024: take the intersection of the similar pixels screened at the two known times. Some land covers change over time and their spectral signatures change accordingly, so similar pixels screened from the image of a single date may be wrong. For example, suppose the window contains two land-cover types, bare land and crops, and the center pixel is a crop pixel. If at time t_m the crops have not yet emerged, their spectral signature is identical to bare land and the pixels screened at that time are bare land; if at time t_n the crops have grown, the correct crop pixels can be screened out on the t_n image. For this reason, the present embodiment screens similar pixels separately on the two high-spatial-resolution images at t_m and t_n and takes the intersection of the two results, thereby improving the accuracy of the screening. Figure 3 is a schematic diagram of the intersected screening result, where "×" denotes the center pixel and "○" denotes its similar pixels.
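For illustration only (not part of the claimed embodiment), the threshold test of formula 15 and the intersection of the two dates' screening results can be sketched with NumPy. All function and array names here are hypothetical, and σ(B) is computed over the window rather than the whole image purely to keep the sketch self-contained:

```python
import numpy as np

def similar_pixel_mask(window, m):
    """Threshold screening (formula 15): a window pixel is similar to the
    center pixel if, in every band B, its reflectance differs from the
    center pixel's by no more than sigma(B) * 2 / m."""
    bands, w, _ = window.shape
    c = w // 2
    # sigma(B): per-band standard deviation (the patent computes it over the
    # whole image; the window is used here only for self-containment)
    sigma = window.reshape(bands, -1).std(axis=1)
    diff = np.abs(window - window[:, c:c + 1, c:c + 1])
    thresh = (sigma * 2.0 / m).reshape(bands, 1, 1)
    return np.all(diff <= thresh, axis=0)

def screen_similar(window_tm, window_tn, m=4):
    """Screen at both known times t_m and t_n and intersect the results
    (step 1024); the center pixel always remains a similar pixel."""
    mask = similar_pixel_mask(window_tm, m) & similar_pixel_mask(window_tn, m)
    c = window_tm.shape[1] // 2
    mask[c, c] = True
    return mask
```

The windows are arrays of shape (bands, w, w); the returned boolean mask marks the similar pixels.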
Fig. 4 is the flow of calculating the similar-pixel weights in the present embodiment. As shown in the figure, it comprises the following steps:
Step 10402: calculate the purity of each similar pixel. The weight W determines the contribution of each similar pixel screened out in the window to the calculation of the target pixel's reflectance. In the present embodiment, W is jointly determined by the purity of each similar pixel and its spatial distance from the center pixel: the higher the purity and the smaller the distance, the larger the weight. This is because a pure pixel on a homogeneous land surface provides the most accurate change information, and a similar pixel spatially closer to the center pixel is assumed to change more consistently with it.
A pixel on the high-spatial-resolution image represents a certain land-cover type, and the low-spatial-resolution pixel at the corresponding position contains that type; purity refers to the proportion of this land-cover type within the low-spatial-resolution pixel. If the low-spatial-resolution pixel is entirely covered by this land-cover type under homogeneous surface conditions, it is considered a pure pixel. For example, if the 30 m × 30 m area covered by a Landsat pixel is wheat, and the 500 m × 500 m area covered by the MODIS pixel at the corresponding position is also entirely wheat, that MODIS pixel is a pure wheat pixel. Since different land covers have different spectral signatures, the purity can be judged by the degree of consistency between the spectral curve of the high-spatial-resolution pixel and that of the low-spatial-resolution pixel at its corresponding position: the higher the spectral consistency, the higher the purity. The consistency between the spectral curves of the high- and low-spatial-resolution pixels can be characterized by the correlation coefficient, so the purity R of each screened similar pixel is calculated by formula 16: R_i = cov(F_i, C_i) / √(D(F_i) · D(C_i))    (16)
F_i = {F(x_i, y_i, t_m, B_1), …, F(x_i, y_i, t_m, B_n), F(x_i, y_i, t_n, B_1), …, F(x_i, y_i, t_n, B_n)}
C_i = {C(x_i, y_i, t_m, B_1), …, C(x_i, y_i, t_m, B_n), C(x_i, y_i, t_n, B_1), …, C(x_i, y_i, t_n, B_n)}
Here the vectors F_i and C_i are the spectral vectors of the i-th similar pixel at high and low spatial resolution respectively, cov denotes the covariance and D the variance. R ranges from −1 to 1, and a larger R indicates a higher purity. The spectral curves of both times t_m and t_n are pooled when calculating the correlation coefficient because land covers change with phenology, their spectral signatures change accordingly, and the spectrum of a single date may be misleading. For example, wheat before green-up has the same spectral signature as bare land; if the low-spatial-resolution pixel corresponding to a wheat high-spatial-resolution pixel also contains bare land, an image acquired before green-up would yield a high purity, which is obviously incorrect, whereas adding an image acquired after green-up yields a more correct purity. It follows that as long as the all-band spectral vectors of the high-spatial-resolution pixel at no fewer than two known times are obtained, together with the all-band spectral vectors of the low-spatial-resolution pixel at its corresponding position, the purity can be calculated from their degree of consistency.
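For illustration only, formula 16 is simply the Pearson correlation between the two stacked spectral vectors and can be sketched as follows (function name hypothetical; equivalent to `np.corrcoef`):

```python
import numpy as np

def purity(F_i, C_i):
    """Purity R_i (formula 16): correlation between the spectral vectors of
    similar pixel i at high (F_i) and low (C_i) spatial resolution, stacked
    over all bands of both known times t_m and t_n."""
    F_i = np.asarray(F_i, dtype=float)
    C_i = np.asarray(C_i, dtype=float)
    # np.cov uses ddof=1 by default, so the variances must match it
    return np.cov(F_i, C_i)[0, 1] / np.sqrt(F_i.var(ddof=1) * C_i.var(ddof=1))
```

A perfectly proportional pair of spectral vectors yields R = 1, the pure-pixel case handled in step 10406.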
Using purity as one element of the weight exploits the similar pixels more effectively, and yields more accurate results, than the existing practice of using the difference between the high- and low-spatial-resolution reflectances of the same date as the weight element representing spectral similarity.
Step 10404: judge whether the similar pixels include a pure pixel; if so, go to step 10406, otherwise go to step 10408.
Step 10406: set the weight of the pure pixels to 1. In the present embodiment, the weight W ranges from 0 to 1 and the weights of all similar pixels sum to 1. A pixel with R = 1 is a pure pixel. To make the fullest use of pure pixels, their weight is set to 1; that is, the reflectance-change information of the center pixel is provided entirely by these pure pixels. In another embodiment, when P pure pixels with R = 1 exist among the similar pixels, the weight W of each pure pixel is set to 1/P and the weights of the other similar pixels to 0. Of course, the pure-pixel judgment may also be carried out at another point of the weight calculation, for example before step 10402 or after step 10408, or may even be omitted altogether.
Step 10408: calculate the spatial distance of each similar pixel from the center pixel. The distance d_i between a similar pixel (x_i, y_i) and the window center pixel (x_{w/2}, y_{w/2}) is calculated by formula 17:

d_i = 1 + √((x_{w/2} − x_i)² + (y_{w/2} − y_i)²) / (w/2)    (17)
This formula includes a normalization of the spatial distance: within a window of size w, d_i ranges from 1 to 1 + 2^0.5, and a larger value means the similar pixel is farther from the center pixel.
Step 10410: calculate the weight of each similar pixel. Purity and spatial distance, two mutually independent quantities, are now combined to determine the weight W of each similar pixel. As mentioned above, a pixel with higher purity and smaller distance provides more accurate information: the larger R and the smaller d, the larger W. In the present embodiment, purity and spatial distance are combined into a total distance D by formula 18:

D_i = (1 − R_i) × d_i    (18)

The larger D, the smaller the similar pixel's contribution to the calculation of the target pixel's reflectance. The reciprocal of D is then normalized to obtain the weight W of each similar pixel by formula 19:

W_i = (1/D_i) / Σ_{i=1}^{N} (1/D_i)    (19)
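For illustration only, formulas 17–19 can be combined into one short NumPy function (names hypothetical). It assumes no pure pixel is present, i.e. R < 1 for every similar pixel, since the pure-pixel case R = 1 would make D_i = 0 and is handled separately in step 10406:

```python
import numpy as np

def similar_pixel_weights(R, xi, yi, w):
    """Combine purity R and spatial distance into normalized weights.
    R, xi, yi are arrays over the similar pixels; w is the window size,
    so the center pixel sits at (w/2, w/2)."""
    R = np.asarray(R, dtype=float)
    xi = np.asarray(xi, dtype=float)
    yi = np.asarray(yi, dtype=float)
    cx = cy = w / 2.0
    d = 1.0 + np.sqrt((cx - xi) ** 2 + (cy - yi) ** 2) / (w / 2.0)  # formula 17
    D = (1.0 - R) * d                                               # formula 18
    inv = 1.0 / D
    return inv / inv.sum()                                          # formula 19
```

The weights sum to 1, matching the normalization of step 10406.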
Of course, the relation among purity, distance and weight may be established in various ways, as long as similar pixels with higher purity and smaller distance to the center pixel receive larger weights.
Fig. 5 is the flow of obtaining the conversion coefficient. As shown in the figure, it comprises the following steps:
Step 1062: calculate the conversion coefficient. From formula 4 and formula 13 it can be seen that the conversion coefficient V contains the relative-calibration gain a and the proportionality factor v between the high- and low-spatial-resolution change amounts, both of which can be obtained by linear regression of each band's high- and low-spatial-resolution reflectances at the known times t_m and t_n. To reduce the uncertainty brought to the conversion coefficient by noise contamination and the like, the linear regression is performed over the t_m and t_n high- and low-spatial-resolution reflectances of all similar pixels, and the slope of the regression line is the conversion coefficient V. To make greater use of the information of the more reliable similar pixels, the linear regression is carried out by weighted least squares with the weight W of each similar pixel. Figure 6 is a schematic diagram of the conversion-coefficient calculation. In a certain window, 12 pixels (including the center pixel itself) belong to the same land-cover type as the center pixel; the dashed boxes mark the reflectances at t_m and t_n respectively. From t_m to t_n both the high-spatial-resolution reflectance and the corresponding low-spatial-resolution reflectance increase, but the high-spatial-resolution reflectance increases more strongly, by a factor of 6.448 relative to the low-spatial-resolution one (R = 0.915, P < 0.001). This indicates that the band reflectance of this land-cover type changed considerably between t_m and t_n, as when vegetation growth causes the near-infrared reflectance to rise sharply. According to the theoretical derivation, every similar pixel reflects the conversion coefficient between the low- and high-spatial-resolution reflectances, but because of noise and other influences, V usually cannot be calculated from only a few similar pixels: such a result would carry great uncertainty, whereas linear regression exploits the information of all similar pixels to obtain a more robust conversion coefficient. Since the similar pixels differ in their contributions, the weight of each sample is further taken into account in the regression; the present embodiment adopts weighted least squares, though other regression methods may also be used.
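For illustration only, the weighted least-squares slope of step 1062 can be sketched with `np.polyfit` (function name hypothetical). Note that `polyfit`'s `w` argument multiplies the residuals before squaring, so the square roots of the similar-pixel weights are passed to weight the squared residuals by W:

```python
import numpy as np

def conversion_coefficient(C, F, W):
    """Slope V of the weighted linear regression of high-resolution
    reflectance F on low-resolution reflectance C, with similar-pixel
    weights W, over all similar pixels at the known times."""
    C = np.asarray(C, dtype=float)
    F = np.asarray(F, dtype=float)
    W = np.asarray(W, dtype=float)
    slope, _intercept = np.polyfit(C, F, 1, w=np.sqrt(W))
    return slope
```

On noise-free collinear data the slope is recovered exactly regardless of the weights; the weights matter only when the similar pixels scatter around the regression line.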
Step 1064: adjust the conversion coefficient according to the uncertainty. Data-processing steps such as geometric registration and atmospheric correction all introduce some error into the reflectance. If the change of a low-spatial-resolution pixel's reflectance between t_m and t_n is too small, lying within the error range, the slope V obtained by the weighted regression carries great uncertainty. Two situations may cause this: the land cover whose reflectance changed occupies only a very small proportion of the low-spatial-resolution pixel, so the change is not visible at the scale of the whole low-spatial-resolution pixel; or some land covers inside the low-spatial-resolution pixel increased in reflectance while others decreased by an equal amount, so that no change appears at the low-spatial-resolution scale. To address this, in the present embodiment the conversion coefficient is set to 1 when the mean low-spatial-resolution reflectance change of all similar pixels is smaller than the uncertainty of the change between the two known times. Suppose the uncertainty u of each band's reflectance is 1% of that band's maximum value, and that this uncertainty is independent between the bands of each image; the uncertainty of the change between the two known times is then:

u = √(u_tm² + u_tn²)    (20)

where u_tm and u_tn are the reflectance uncertainties at t_m and t_n respectively. If the mean low-spatial-resolution reflectance change of all similar pixels is smaller than the uncertainty calculated by formula 20, the second change amount can only be allocated equally to each high-spatial-resolution pixel inside it, i.e. the conversion coefficient V of the change amount is 1. Of course, the uncertainty of the change between the known times may also be obtained by other statistical methods; different expressions of formula 20 and other equivalent calculations should all fall within the protection scope of the present invention. This avoids calculating a spurious conversion coefficient V when the reflectance change between the two times is too small relative to the inherent uncertainty, i.e. the error, of the reflectance itself.
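For illustration only, the uncertainty test of step 1064 reduces to a single comparison (function and parameter names hypothetical):

```python
import numpy as np

def adjust_for_uncertainty(V, mean_dC, u_tm, u_tn):
    """If the mean low-resolution reflectance change of the similar pixels
    is below the combined uncertainty u = sqrt(u_tm^2 + u_tn^2) of
    formula 20, the regression slope is unreliable and V is set to 1,
    i.e. the low-resolution change is passed through unscaled."""
    u = np.hypot(u_tm, u_tn)
    return 1.0 if abs(mean_dC) < u else V
```

With per-band uncertainties of 1% of the band maximum, a change smaller than their quadrature sum is treated as noise.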
Step 1066: correct anomalous conversion coefficients. Image noise can produce conversion coefficients of excessively large absolute value. To correct these individual anomalies, the conversion coefficients of all pixels of the whole image are analyzed statistically; in the present embodiment, conversion coefficients V lying more than 2 standard deviations from the mean are set to 1. Of course, coefficients beyond another preset multiple of the standard deviation may be set to 1 as required.
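For illustration only, the image-wide correction of step 1066 can be sketched as follows (names hypothetical; `V_map` is the array of per-pixel conversion coefficients):

```python
import numpy as np

def correct_anomalous(V_map, n_std=2.0):
    """Conversion coefficients farther than n_std standard deviations
    from the image-wide mean are reset to 1 (step 1066)."""
    V_map = np.asarray(V_map, dtype=float).copy()
    mu, sd = V_map.mean(), V_map.std()
    V_map[np.abs(V_map - mu) > n_std * sd] = 1.0
    return V_map
```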
Fig. 7 is the flow of calculating the high-spatial-resolution reflectance of the target pixel at the prediction time. As shown in the figure, it comprises: step 10802, calculating, from the low-spatial-resolution image at the prediction time, the second change amount of each similar pixel relative to each known time; step 10804, converting the second change amount into the first change amount by means of the conversion coefficient; step 10806, weighting the first change amounts of the similar pixels by their weights and averaging them to obtain the first change amount of the target pixel; and step 10808, adding the first change amount of the target pixel to the known-time high-spatial-resolution reflectance to obtain a prediction of the target pixel's high-spatial-resolution reflectance based on each different known time. These four steps carry out the calculation of formula 14. Take as an example the calculation of the t_p high-spatial-resolution reflectance from a low-spatial-resolution image at any time t_p between, or near, the two known acquisition times t_m and t_n. By formula 14, the t_p high-spatial-resolution reflectance can be calculated from the known t_m and t_n high- and low-spatial-resolution images; since the high- and low-spatial-resolution reflectances at t_m and t_n are all known, the t_p high-spatial-resolution reflectance F(x_{w/2}, y_{w/2}, t_p, B) can be calculated separately from the t_m and from the t_n high-spatial-resolution reflectance, and the two results are denoted F_m(x_{w/2}, y_{w/2}, t_p, B) and F_n(x_{w/2}, y_{w/2}, t_p, B) respectively. Of course, as long as the conversion coefficient, the prediction-time low-spatial-resolution image and the weight of each similar pixel are known, the high-spatial-resolution reflectance of the target pixel at the prediction time can be calculated on the basis of the known-time high-spatial-resolution image. Different expressions of formula 14 and other equivalent calculations should all fall within the protection scope of the present invention.
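For illustration only, steps 10802–10808 for a single known time t_k can be sketched as one weighted update (names hypothetical; the arrays run over the similar pixels):

```python
import numpy as np

def predict_from_known(F_k, C_p, C_k, V, W):
    """Prediction based on known time t_k: the low-resolution change of
    each similar pixel (second change, C_p - C_k) is converted to a
    high-resolution change (first change) by the conversion coefficient V,
    averaged with the similar-pixel weights W, and added to the known-time
    high-resolution reflectance F_k of the target pixel."""
    first_change = V * (np.asarray(C_p, dtype=float) - np.asarray(C_k, dtype=float))
    return F_k + np.sum(np.asarray(W, dtype=float) * first_change)
```

Running this once with t_k = t_m and once with t_k = t_n yields the two predictions F_m and F_n that step 10810 then combines.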
Step 10810: obtain the prediction-time high-spatial-resolution reflectance by weighting. To make full use of the high-spatial-resolution reflectance information at both t_m and t_n, the two results are combined by a weighted sum to give the final prediction. It is assumed that the smaller the change of the low-spatial-resolution reflectance between t_p and a known time, the more accurate and reliable the corresponding result: if no reflectance change appears on the low-spatial-resolution image, the high-spatial-resolution image is taken to have no reflectance change either. Based on this assumption, the temporal weights T of the two results are calculated by formula 21:
T_k = (1 / |Σ_{i=1}^{w} Σ_{j=1}^{w} C(x_i, y_j, t_k, B) − Σ_{i=1}^{w} Σ_{j=1}^{w} C(x_i, y_j, t_p, B)|) / Σ_{k=m,n} (1 / |Σ_{i=1}^{w} Σ_{j=1}^{w} C(x_i, y_j, t_k, B) − Σ_{i=1}^{w} Σ_{j=1}^{w} C(x_i, y_j, t_p, B)|),  (k = m, n)    (21)
After the temporal weights T are obtained, the t_p high-spatial-resolution reflectance F(x_{w/2}, y_{w/2}, t_p, B) is calculated by formula 22:

F(x_{w/2}, y_{w/2}, t_p, B) = T_m × F_m(x_{w/2}, y_{w/2}, t_p, B) + T_n × F_n(x_{w/2}, y_{w/2}, t_p, B)    (22)
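For illustration only, formulas 21 and 22 can be sketched as follows (names hypothetical; `sumC_*` stand for the window sums Σ_i Σ_j C(x_i, y_j, t, B) at each time):

```python
def temporal_predict(F_m_pred, F_n_pred, sumC_tm, sumC_tn, sumC_tp):
    """Combine the predictions based on t_m and t_n with temporal weights
    inversely proportional to the total low-resolution reflectance change
    between each known time and the prediction time t_p (formulas 21-22)."""
    inv_m = 1.0 / abs(sumC_tm - sumC_tp)
    inv_n = 1.0 / abs(sumC_tn - sumC_tp)
    T_m = inv_m / (inv_m + inv_n)   # formula 21, k = m
    T_n = inv_n / (inv_m + inv_n)   # formula 21, k = n
    return T_m * F_m_pred + T_n * F_n_pred  # formula 22
```

The known time whose low-resolution window changed least toward t_p dominates the final prediction.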
Of course, other methods may also be used to calculate the weights of the high-spatial-resolution reflectances derived from the different known times. For example, as in formula 23, the weight may be derived from the difference between the low-spatial-resolution reflectances at the prediction time t_0 and the known time t_k:

T_ijk = |C(x_i, y_j, t_0) − C(x_i, y_j, t_k)|    (23)
The high-spatial-resolution reflectance of the target pixel at the prediction time is thus finally obtained. Using the conversion-coefficient fitting of step 108 alone already improves the accuracy for small objects and linear objects; the similar-pixel screening and the improved weight calculation further raise the overall accuracy.
To illustrate and verify the present embodiment further, it is checked below against simulated data and against real data.
Simulated-data check. To better examine the accuracy and reliability of the method, simple simulated data are first applied to the present embodiment. For ease of comparison with the STARFM method of Gao, the simulated data from Gao's paper are used for the tests. The present embodiment is hereinafter abbreviated as MDCM.
The case of phenological change. Phenology is the main cause of reflectance change, especially in areas of high vegetation cover. Figure 15 shows the simulated phenological change: in (a)–(c) the reflectance of the circular area is fixed at 0.05 while the reflectance of the surrounding area changes from 0.1 to 0.2 and then to 0.4; (d)–(f) are the low-spatial-resolution images aggregated from (a)–(c) respectively; (g) and (h) are the predictions of (b) by MDCM and STARFM respectively. For simplicity, only two land-cover types are assumed, water and vegetation, with fixed shapes. The circular water body has a fixed reflectance of 0.05, while the vegetation reflectance around it increases from 0.1 (Figure 15(a)) to 0.2 (Figure 15(b)) and then to 0.4 (Figure 15(c)), representing the increase in near-infrared reflectance caused by vegetation growth as phenology advances. Figures 15(a)–(c) are taken as analogous to 30 m resolution Landsat images, and Figures 15(d)–(f), obtained by aggregating (a)–(c) by a factor of 17 × 17, as analogous to 500 m resolution MODIS images. Figure 15(b) is predicted using Figures 15(a), (c) and (d)–(f). Figures 15(g) and (h), the results of the MDCM method and of STARFM, agree completely with Figure 15(b), showing that the high-spatial-resolution reflectance generated by either method accurately reflects the reflectance change caused by phenology. Since phenology-driven reflectance change is the most ubiquitous phenomenon in real images, both MDCM and STARFM can be applied to real images to generate reflectance images of high spatial and temporal resolution.
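For illustration only, the 17 × 17 aggregation used to simulate the low-spatial-resolution images in these experiments is a simple block average (function name hypothetical):

```python
import numpy as np

def aggregate(high, factor=17):
    """Simulate a low-resolution image by block-averaging the
    high-resolution one: each simulated low-resolution pixel is the mean
    of a factor x factor block of high-resolution pixels."""
    h, w = high.shape
    assert h % factor == 0 and w % factor == 0, "image must tile exactly"
    return high.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```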
The case of linear objects. Linear objects such as roads, bridges and rivers are quite common in land cover and land use, the human eye is highly sensitive to them, and their correctness directly affects the visual quality of the generated high-resolution image. Figure 16 shows the simulated linear-object case: the reflectance of the circular area in (a)–(c) is fixed at 0.05, the reflectance of the straight line is fixed at 0.5, and the reflectance of the surrounding area changes from 0.1 to 0.2 and then to 0.4; (d)–(f) are the low-spatial-resolution images aggregated from (a)–(c) respectively; (g) and (h) are the MDCM and STARFM predictions of the linear object's reflectance in (b).
Figures 16(a)–(c) add, on the basis of Figures 15(a)–(c), a linear object along the diagonal with a fixed reflectance of 0.5, simulating a road on land and a bridge over water; Figures 16(d)–(f) are the low-spatial-resolution images aggregated from Figures 16(a)–(c) respectively. Figure 16(b) is then reconstructed from the other five images. Figure 16(g) shows the reconstructions of Figure 16(b) by MDCM and STARFM: qualitatively, both methods predict the linear object. Figure 17 gives the quantitative predictions of the linear object's reflectance in Figure 16(b) by the two methods: STARFM cannot recover the linear object's reflectance entirely correctly, its errors appearing mainly in the segment outside the circular object where the surrounding area changed, whereas the MDCM method recovers the linear object's reflectance exactly.
The case of small objects. Small objects are also very common in real images, particularly in areas of fragmented land-cover patches, such as small fields or small water bodies. If these objects are smaller than the scale of a low-spatial-resolution pixel, their shape cannot be observed clearly, nor their reflectance obtained, from the low-spatial-resolution image, yet the regenerated high-spatial-resolution image should reflect their shape and reflectance accurately. Figure 18 shows the simulated small-object case: the reflectance of the circular areas in (a)–(c) changes from 0.1 to 0.2 and then to 0.4, the diameters of the four circular objects in each figure being 5, 10, 15 and 20 high-spatial-resolution pixels in turn, while the reflectance of the surrounding area is fixed at 0.05; (d)–(f) are the low-spatial-resolution images aggregated from (a)–(c) respectively; (g) and (h) are the STARFM and MDCM predictions of (b) respectively.
Only two land-cover types are assumed: the circular objects in Figures 18(a)–(c) and the surrounding background. The four circular objects have diameters of 5, 10, 15 and 20, their reflectance increases from 0.1 (Figure 18(a)) to 0.2 (Figure 18(b)) and then to 0.4 (Figure 18(c)), and the reflectance of the surrounding area is fixed at 0.05. Figures 18(d)–(f) are the images aggregated from Figures 18(a)–(c) by a factor of 17 × 17. Figures 18(g) and (h) are the STARFM and MDCM reconstructions of Figure 18(b) respectively, and Figure 19 shows the prediction errors of MDCM and STARFM for each small circular object in Figure 18(b), namely the mean difference between the predicted and actual reflectance of the pixels of objects of each diameter. It can be seen that STARFM obtains the correct reflectance only for the circular object of diameter 20, while for all objects smaller than the low-spatial-resolution pixel scale of 17 the generated reflectance is lower than the actual value; the MDCM method generates the correct reflectance for objects of every size. The main reason is that STARFM needs pure pixels to provide the change information, and a small object below the low-spatial-resolution pixel scale has no pure pixel on the low-spatial-resolution image, so its generated reflectance is in error; the MDCM method, when no pure pixel can be found in the window, corrects the change information provided by the non-pure pixels through the proportionality factor between the reflectance change of these small objects and the reflectance change of the low-spatial-resolution pixels containing them, and can therefore generate the reflectance of these small objects accurately.
Real-data check. The MDCM method is applied to real Landsat-7 and MODIS remote sensing images. As with the simulated data, for comparison with the STARFM of Gao et al., the remote sensing data used in the tests of Gao et al. are employed; the test data can be downloaded from http://ledaps.nascom.nasa.gov/ledaps/Tools/StarFM.htm. The spatial resolution of the Landsat-7 ETM+ data is 30 m and that of the MODIS data is 500 m; the bands are the green, red and near-infrared MODIS bands 4, 1 and 2, corresponding respectively to ETM+ bands 2, 3 and 4.
The geographic position of the area covered by the data is 54°N, 104°W, with a size of 12 km × 12 km; the main land-cover types are vegetation, water, bare land and roads, the vegetation growing season is short, and phenology changes rapidly. There are three groups of images, acquired on May 24, 2001, July 11, 2001 and August 12, 2001 respectively. Figure 20 shows the false-color composites (near-infrared, red and green bands assigned to red, green and blue), with the MODIS images in the upper row and the Landsat images in the lower row, the acquisition dates from left to right being May 24, 2001, July 11, 2001 and August 12, 2001. The two image pairs of May 24 and August 12 and the MODIS image of July 11 are used to generate an image at Landsat resolution for July 11, which is compared with the real Landsat image of July 11 to evaluate the accuracy of the MDCM method. According to the characteristics of this area, the window w is set in the present embodiment to 3 × 3 MODIS pixels, i.e. 50 × 50 Landsat pixels, and the number of land-cover types m is set to 4. Figure 21 shows the false-color composites at Landsat resolution for July 11 generated by STARFM and by the MDCM method: on the left the real Landsat image of July 11, and on the right the Landsat-resolution images generated by STARFM and by the MDCM method.
As can be seen from Figure 21, the image generated by the MDCM method is visually very close to the real image and preserves fine detail well, whereas some details in the STARFM result are not distinct and the image appears visually smoothed. This is mainly because small objects have no corresponding pure pixel at the corresponding low spatial resolution: for these small-object pixels STARFM cannot generate the correct high-resolution reflectance, while the MDCM method can overcome this problem.
To compare the accuracy of the two methods quantitatively, the absolute error between the predicted and the real reflectance, i.e. the absolute value of their difference, is calculated. Figure 22 shows the scatter plots between the real and the reconstructed image for each band, with the STARFM results in the upper row and the MDCM results in the lower row; R is the correlation coefficient and AD the mean absolute error.
Figure 22 shows, for each band, the scatter between the results of the two methods and the actual values. For the green band, the MDCM results are somewhat higher than the actual values, giving a whole-image mean absolute error AD of 0.0039, slightly above the 0.0033 of STARFM. For the red band, the MDCM result is slightly better than STARFM: the correlation coefficient R between predicted and actual values is 0.90 with an absolute error of 0.0034, against R = 0.89 and 0.0037 for STARFM; in the scatter plot the MDCM results lie somewhat high and the STARFM results somewhat low. For the near-infrared band, the MDCM result is clearly better than STARFM: R = 0.95 with an absolute error of 0.0103, against R = 0.85 and 0.0155 for STARFM; in the scatter plot the MDCM results basically agree with the actual values, whereas STARFM underestimates the reflectance of many pixels, and these underestimated pixels are precisely the small-object patches in the image.
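For illustration only, the two accuracy measures reported in Figure 22 can be sketched as follows (function name hypothetical):

```python
import numpy as np

def accuracy_metrics(pred, truth):
    """Correlation coefficient R and mean absolute error AD between a
    predicted and a real reflectance image."""
    pred = np.ravel(np.asarray(pred, dtype=float))
    truth = np.ravel(np.asarray(truth, dtype=float))
    R = np.corrcoef(pred, truth)[0, 1]
    AD = np.mean(np.abs(pred - truth))
    return R, AD
```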
Correspondingly, the present invention also provides a remote sensing image processing system. Two groups of remote sensing images taken of the same land surface are obtained, the first group having high spatial resolution and the second group low spatial resolution; from these two groups the high-spatial-resolution image A at time t_p is derived, image A being composed of at least one pixel to be calculated. An embodiment of the remote sensing image processing system is introduced below with reference to Figures 8–14.
As shown in Figure 8, the remote-sensing image processing system comprises the following modules: an image selection module 101, used to obtain a first set of images with high spatial resolution and a second set of images with low spatial resolution of the same land-surface region, and to select at least two pairs of images corresponding to different times from the two sets; a target pixel setting module 102, used to set a pixel of the sought image A0 at a given time as the target pixel, and to determine the land-surface region corresponding to the target pixel as the target region; a target pixel reflectance calculation module 104, used to calculate the reflectance of the target pixel; and an image generation module 108, used to generate the image A0 from the reflectance of each target pixel calculated by the target pixel reflectance calculation module 104.
The target pixel reflectance calculation module 104 comprises the following submodules:
Conversion coefficient module 103, which calculates from the selected images the conversion coefficient of the reflectance change of the target region between the two image sets. As shown in Figure 9, in this embodiment the conversion coefficient module 103 comprises: a center pixel setting module 1031, used to set the pixel corresponding to the target region in the selected first-set images as the center pixel; a similar pixel screening module 1032, used to screen out, from the image containing each center pixel, at least one similar pixel belonging to the same land-cover class as the center pixel; and a conversion coefficient calculation module 1033, used to determine the land-surface region corresponding to each similar pixel as a similar region, and to calculate the conversion coefficient from the reflectance of the similar regions in the selected images. An adjustment module 1034 is used to adjust possibly incorrect conversion coefficients caused by error and to correct abnormal conversion coefficients; it is described in detail below with reference to Figure 14. In this embodiment, the conversion coefficient in the conversion coefficient calculation module 1033 is obtained by weighted regression analysis, with the reflectance change obtained as the second change quantity as the independent variable, the first change quantity as the dependent variable, and the weight of each similar pixel as the regression weight. The screening of similar pixels is implemented as explained above for step 1022 and is not repeated here.
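The weighted regression described above can be sketched as follows: the conversion coefficient is the slope of a weighted least-squares line fitted over the similar pixels, with the low-resolution (second) change quantities as the independent variable and the high-resolution (first) change quantities as the dependent variable. This is an illustrative Python/NumPy sketch, not the patent's implementation; the function name is chosen for this example.

```python
import numpy as np

def conversion_coefficient(delta_high, delta_low, weights):
    """Slope of a weighted linear regression of the high-resolution
    reflectance changes (dependent variable) on the low-resolution
    reflectance changes (independent variable) over the similar pixels."""
    x = np.asarray(delta_low, dtype=float)
    y = np.asarray(delta_high, dtype=float)
    w = np.asarray(weights, dtype=float)
    xm = np.average(x, weights=w)          # weighted mean of independent var
    ym = np.average(y, weights=w)          # weighted mean of dependent var
    return np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)
```

With unit weights this reduces to ordinary least squares; the per-pixel weights let reliable similar pixels dominate the fit.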
Weight module 109, used to calculate the weight of each similar pixel from the selected images; it is described in detail below with reference to Figures 11 and 12.
Second change quantity calculation module 105, used to select from the second image set the image B0 corresponding in time to the image A0 and at least one image at another time, and to calculate from the selected images the change of the reflectance of the target region in each of the other-time images relative to its reflectance in image B0. As shown in Figure 10, the second change quantity calculation module 105 comprises: a similar pixel second change quantity calculation module 1051, used to select from the second image set the image B0 corresponding to the given time and at least one image at another time, and to calculate from the selected images the reflectance change of each similar region in the second image set; and a second change quantity weighted averaging module 1052, which, using the obtained weights, computes the weighted average of the reflectance changes obtained by the similar pixel second change quantity calculation module 1051 to give the change of the target region's reflectance in each other-time image relative to its reflectance in image B0.
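The weighted averaging step performed by module 1052 can be sketched as below: the low-resolution reflectance change between another time t<sub>k</sub> and the base time of B0 is averaged over the similar regions using the similar-pixel weights. Illustrative Python/NumPy only; function and argument names are assumptions for this example.

```python
import numpy as np

def target_change(low_res_tk, low_res_t0, similar_idx, weights):
    """Weighted average, over the similar regions, of the low-resolution
    reflectance change between another time tk and the base time t0."""
    delta = np.asarray(low_res_tk, float) - np.asarray(low_res_t0, float)
    d = delta.ravel()[similar_idx]     # changes at the similar regions only
    w = np.asarray(weights, float)
    return np.sum(w * d) / np.sum(w)   # weighted mean change of target region
```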
Change quantity conversion module 106, used to convert each change quantity obtained by the second change quantity calculation module 105 into the corresponding reflectance change in the first image set according to the conversion coefficient.
Reflectance calculation module 107, used to calculate the reflectance of the target region from each reflectance change obtained by the change quantity conversion module 106 and the reflectance of the target region in each first-set image at the corresponding time. In this embodiment, the calculation of formula 14 is completed through the second change quantity weighted averaging module 1052, the change quantity conversion module 106, and the reflectance calculation module 107, yielding the reflectance of the target pixel. The image generation module 108 then generates the image A0 from the reflectances of all pixels to be calculated, computed one by one.
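Formula 14 itself is not reproduced in this passage, but the relation the three modules implement can be sketched from the surrounding description: the low-resolution change from the base time to another time t<sub>k</sub>, converted by the coefficient v, is removed from the known high-resolution reflectance at t<sub>k</sub> to recover the reflectance at the prediction time. The following is an assumption-labelled sketch of that single-time relation, not the patent's exact formula.

```python
def predict_reflectance(high_res_tk, conv_coeff, low_change_k):
    """Sketch of the relation behind formula 14 for one reference time tk.

    low_change_k is the low-resolution reflectance at tk minus that in B0
    (the image at the prediction time); subtracting its converted counterpart
    from the high-resolution value at tk gives the sought reflectance."""
    return high_res_tk - conv_coeff * low_change_k
```

In the full method the values obtained from several reference times are further combined by time weights, as described in the next embodiment.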
In another embodiment, the target pixel reflectance calculation module 104 further comprises a time weight calculation module 111, used to calculate, from the change quantities obtained by the second change quantity calculation module 105, the time weight of each obtained reflectance of the target region. The reflectance calculation module 107 further comprises a reflectance weighting module, used to obtain the sought reflectance of the target pixel as the weighted average of the reflectances according to the time weights. The calculation of the time weights and of the final target pixel reflectance is implemented as explained above for step 10810 and is not repeated here.
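The time-weight formula is deferred to step 10810 and not given in this passage. One plausible scheme, shown purely as an illustration and not taken from the patent, weights each reference time inversely to the magnitude of its overall low-resolution change, so that a time closer in surface state to the prediction time contributes more:

```python
import numpy as np

def time_weights(total_low_changes):
    """Illustrative time-weight scheme (an assumption, not the patent's
    formula): weight each reference time inversely to the magnitude of its
    overall low-resolution reflectance change, normalised to sum to one."""
    inv = 1.0 / (np.abs(np.asarray(total_low_changes, float)) + 1e-12)
    return inv / inv.sum()
```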
As shown in Figure 11, the weight module 109 comprises: a purity calculation module 1091, used to calculate from the selected images the degree of consistency between the all-band spectral vector of each similar pixel and the all-band spectral vector of the pixel of each similar region in the second image set; a pure pixel judgment module 1093, used to judge whether a pure pixel exists among the similar pixels, and, when one exists, to set the weight of that pure pixel to the maximum value so that all change information is taken from it, while leaving the weights of the similar pixels unadjusted when no pure pixel exists; a spatial distance calculation module 1094, used to calculate the spatial distance of each similar pixel from its center pixel; and a weight calculation module 1095, used to calculate the weight of each similar pixel from its purity and its spatial distance from the center pixel. The implementation of each module in the weight module 109 follows the explanation of step 104 above and is not repeated here.
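The exact combination of purity and distance is given in step 104 and not reproduced in this passage. The sketch below shows one plausible form of the idea, as an illustrative assumption only: a similar pixel is weighted higher when its spectral consistency (purity) is high and its distance from the center pixel is small.

```python
import numpy as np

def similar_pixel_weights(purity, distance):
    """Illustrative weighting (an assumption, not the patent's formula):
    combine spectral inconsistency (1 - purity) with spatial distance into
    a score, invert it, and normalise so the weights sum to one."""
    purity = np.asarray(purity, float)
    distance = np.asarray(distance, float)
    score = (1.0 - purity) * distance + 1e-12   # smaller score = better pixel
    w = 1.0 / score
    return w / w.sum()
```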
As shown in Figure 12, in another embodiment the weight module 109 comprises: a purity calculation module 1091, used to calculate, from the at least two pairs of time-corresponding images, the degree of consistency between the all-band spectral vector of each similar pixel and the all-band spectral vector of the pixel of the corresponding land-surface region in the second-set image of the same time; and a similar pixel weight calculation module 1092, used to calculate the weight of each similar pixel from the degree of consistency. The purity and weight calculations follow the previous embodiment of the weight module 109 and are not repeated here.
As shown in Figure 13, in another embodiment the similar pixel screening module 1032 further comprises an intersection screening module 10321, used to take the intersection of the land-surface regions corresponding to the similar pixels screened out from each image, and to take the pixels corresponding to the intersected land-surface region as the similar pixels. The intersection screening module 10321 is implemented as explained above for step 1024 and is not repeated here.
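The intersection screening can be sketched with boolean masks: a pixel is retained as similar only if it was screened out as similar in every selected high-resolution image. Illustrative Python/NumPy; the function name is chosen for this example.

```python
import numpy as np

def intersect_similar_masks(masks):
    """Intersection screening: keep a pixel as similar only when it is
    marked similar in every one of the per-image boolean masks."""
    result = np.asarray(masks[0], dtype=bool)
    for m in masks[1:]:
        result = result & np.asarray(m, dtype=bool)   # logical AND per pixel
    return result
```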
As shown in Figure 14, the adjustment module 1034 comprises: a mean judgment module 10341, used to judge whether the mean of the second change quantities corresponding to the similar pixels is less than the inherent error of the reflectance; an error adjustment module 10342, used to set the conversion coefficient so that the first change quantity equals the second change quantity when the mean judgment module (10341) judges "less than", and to leave the conversion coefficient unchanged when it judges "not less than"; an abnormal conversion coefficient judgment module 10343, used to judge whether the conversion coefficient is abnormal; and an abnormal conversion coefficient correction module 10344, used to correct an abnormal conversion coefficient. The adjustment module 1034 is implemented as explained above for steps 1064 and 1066 and is not repeated here.
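The adjustment logic can be sketched as below. The reset value of 1 (which makes the first change quantity equal the second change quantity) matches the "less than" branch described above; the abnormality bounds and the value an abnormal coefficient is corrected to are not specified in this passage, so the bounds and the reset-to-1 choice in the abnormal branch are illustrative assumptions.

```python
import numpy as np

def adjust_coefficient(conv_coeff, second_changes, refl_error,
                       low=0.0, high=5.0):
    """Sketch of the adjustment module. `low`/`high` and the reset value
    for an abnormal coefficient are illustrative assumptions."""
    # If the mean low-resolution change is below the inherent reflectance
    # error, the regression is unreliable: force first change = second change.
    if np.mean(np.abs(second_changes)) < refl_error:
        return 1.0
    # An out-of-range (abnormal) coefficient is reset to a neutral value.
    if not (low <= conv_coeff <= high):
        return 1.0
    return conv_coeff
```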
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (10)

1. A remote-sensing image processing method, characterized in that a pixel of the sought image A0 at a given time is set as the target pixel, the land-surface region corresponding to the target pixel is determined as the target region, a first set of images with high spatial resolution and a second set of images with low spatial resolution of the same land-surface region are obtained, at least two pairs of images corresponding to different times are selected from the two sets, and the reflectance of each target pixel is calculated according to the following steps to generate the image A0:
a. calculating from the selected images the conversion coefficient of the reflectance change of the target region between the two image sets;
b. selecting from the second image set the image B0 corresponding in time to the image A0 and at least one image at another time, and calculating from the selected images the change of the reflectance of the target region in each of the other-time images relative to its reflectance in image B0;
c. converting each reflectance change quantity into the corresponding reflectance change in the first image set according to the conversion coefficient; and
d. selecting from the first image set the images corresponding to the other times, and calculating the reflectance of the target region from each converted reflectance change and the reflectance of the target region in the selected images.
2. The method according to claim 1, characterized in that step a comprises:
a1. setting the pixel corresponding to the target region in the selected first-set images as the center pixel;
a2. screening out, from the image containing each center pixel, at least one similar pixel belonging to the same land-cover class as the center pixel; and
a3. determining the land-surface region corresponding to each similar pixel as a similar region, and calculating the conversion coefficient from the reflectance of the similar regions in the selected images.
3. The method according to claim 2, characterized in that before step a3 it further comprises:
a21. calculating the weight of each similar pixel from the selected images;
and that step b comprises:
b1. calculating the reflectance change of each similar region in the second image set from the image B0 and the other-time images of the second set;
b2. computing, according to the obtained weights, the weighted average of the reflectance changes obtained in step b1 to give the change of the target region's reflectance in each other-time image relative to its reflectance in image B0.
4. The method according to claim 2, characterized in that the conversion coefficient in step a3 is calculated by regression analysis, with the change of the reflectance of each similar region in the second image set as the independent variable and the change of its reflectance in the first image set as the dependent variable.
5. The method according to claim 4, characterized in that before step a3 it further comprises:
a22. calculating the weight of each similar pixel from the selected images;
and that the regression analysis is a weighted regression analysis carried out according to the weight of each similar pixel.
6. The method according to claim 3 or 5, characterized in that the calculation of the weight of each similar pixel comprises the following steps:
a221. calculating from the selected images the degree of consistency between the all-band spectral vector of each similar pixel and the all-band spectral vector of the pixel of each similar region in the second image set;
a222. calculating the weight of each similar pixel from the degree of consistency.
7. The method according to claim 2, characterized in that step a2 further comprises:
taking the intersection of the land-surface regions corresponding to the similar pixels screened out from each image, and taking the pixels corresponding to the intersected land-surface region as the similar pixels.
8. A remote-sensing image processing system, characterized in that a high-spatial-resolution image A0 is derived from a first set of images with high spatial resolution and a second set of images with low spatial resolution of the same land-surface region, the image A0 not belonging to the first image set and being composed of at least one pixel to be calculated, the processing system comprising:
an image selection module (101), used to obtain the first set of images with high spatial resolution and the second set of images with low spatial resolution of the same land-surface region, and to select at least two pairs of images corresponding to different times from the two sets;
a target pixel setting module (102), used to set a pixel of the sought image A0 at a given time as the target pixel, and to determine the land-surface region corresponding to the target pixel as the target region;
a target pixel reflectance calculation module (104), used to calculate the reflectance of the target pixel; and
an image generation module (108), used to generate the image A0 from the reflectance of each target pixel calculated by the target pixel reflectance calculation module (104);
the target pixel reflectance calculation module (104) comprising the following submodules:
a conversion coefficient module (103), which calculates from the selected images the conversion coefficient of the reflectance change of the target region between the two image sets;
a second change quantity calculation module (105), used to select from the second image set the image B0 corresponding in time to the image A0 and at least one image at another time, and to calculate from the selected images the change of the reflectance of the target region in each of the other-time images relative to its reflectance in image B0;
a change quantity conversion module (106), used to convert each change quantity obtained by the second change quantity calculation module (105) into the corresponding reflectance change in the first image set according to the conversion coefficient; and
a reflectance calculation module (107), used to select from the first image set the images corresponding to the other times, and to calculate the reflectance of the target region from each reflectance change obtained by the change quantity conversion module (106) and the reflectance of the target region in the selected images.
9. The system according to claim 8, characterized in that the conversion coefficient module (103) comprises:
a center pixel setting module (1031), used to set the pixel corresponding to the target region in the selected first-set images as the center pixel;
a similar pixel screening module (1032), used to screen out, from the image containing each center pixel, at least one similar pixel belonging to the same land-cover class as the center pixel; and
a conversion coefficient calculation module (1033), used to determine the land-surface region corresponding to each similar pixel as a similar region, and to calculate the conversion coefficient from the reflectance of the similar regions in the selected images.
10. The system according to claim 9, characterized in that the target pixel reflectance calculation module (104) further comprises:
a weight module, used to calculate the weight of each similar pixel from the selected images;
and that the second change quantity calculation module (105) comprises:
a similar pixel second change quantity calculation module (1051), used to select from the second image set the image B0 corresponding to the given time and at least one image at another time, and to calculate from the selected images the reflectance change of each similar region in the second image set; and
a second change quantity weighted averaging module (1052), which, using the obtained weights, computes the weighted average of the reflectance changes obtained by the similar pixel second change quantity calculation module (1051) to give the change of the target region's reflectance in each other-time image relative to its reflectance in image B0.
CN2009100793750A 2009-03-09 2009-03-09 Remote-sensing image processing method and system Expired - Fee Related CN101482929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100793750A CN101482929B (en) 2009-03-09 2009-03-09 Remote-sensing image processing method and system


Publications (2)

Publication Number Publication Date
CN101482929A CN101482929A (en) 2009-07-15
CN101482929B true CN101482929B (en) 2010-08-25

Family

ID=40880029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100793750A Expired - Fee Related CN101482929B (en) 2009-03-09 2009-03-09 Remote-sensing image processing method and system

Country Status (1)

Country Link
CN (1) CN101482929B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101900817B (en) * 2009-05-27 2013-01-16 中国科学院地理科学与资源研究所 Universal remote sensing data rule gridding method
CN103034981B (en) * 2012-12-18 2015-06-24 武汉大学 Multi-temporal data based remote sensing image weighted regression recovery method
CN103226832B (en) * 2013-05-07 2015-08-05 西安电子科技大学 Based on the multi-spectrum remote sensing image change detecting method of spectral reflectivity mutation analysis
CN106780514B (en) * 2016-12-28 2019-11-19 南京信息工程大学 The calculation method of the area heavy rain Ji Lao depth of accumulated water based on monitor video image
CN109960972B (en) * 2017-12-22 2020-11-10 北京航天泰坦科技股份有限公司 Agricultural and forestry crop identification method based on middle-high resolution time sequence remote sensing data
CN109253976B (en) * 2018-10-22 2021-01-15 北京麦飞科技有限公司 High-spectrum real-time radiometric calibration method based on light sensing module
CN111353402B (en) * 2020-02-24 2021-03-30 中国科学院地理科学与资源研究所 Remote sensing extraction method for oil palm forest
CN112906531B (en) * 2021-02-07 2023-05-23 清华苏州环境创新研究院 Multi-source remote sensing image space-time fusion method and system based on non-supervision classification
CN113160100A (en) * 2021-04-02 2021-07-23 深圳市规划国土房产信息中心(深圳市空间地理信息中心) Fusion method, fusion device and medium based on spectral information image
CN115359369B (en) * 2022-10-19 2023-01-24 中国科学院、水利部成都山地灾害与环境研究所 Mountain satellite image fusion method and system based on time phase self-adaption


Similar Documents

Publication Publication Date Title
CN101482929B (en) Remote-sensing image processing method and system
CN104266982B (en) A kind of large area insect pest quantifies monitoring system
Möller et al. Coupling of phenological information and simulated vegetation index time series: Limitations and potentials for the assessment and monitoring of soil erosion risk
CN107103584A (en) A kind of production high-spatial and temporal resolution NDVI weighted based on space-time method
CN103914678B (en) Abandoned land remote sensing recognition method based on texture and vegetation indexes
CN105865424A (en) Nonlinear model-based multispectral remote sensing water depth inversion method and apparatus thereof
Montzka et al. Modelling the water balance of a mesoscale catchment basin using remotely sensed land cover data
CN108169161A (en) A kind of corn planting regional soil humidity appraisal procedure based on modified MODIS indexes
CN105809148A (en) Crop drought recognition and risk evaluation method based on remote sensing time-space-spectrum fusion
CN102034337A (en) System and method for prairie snow disaster remote sensing monitoring and disaster situation evaluation
CN110321774B (en) Crop disaster situation evaluation method, device, equipment and computer readable storage medium
CN110222870A (en) Assimilate the Regional Fall Wheat yield estimation method of satellite fluorescence data and crop growth model
CN110363246A (en) A kind of fusion method of high-spatial and temporal resolution vegetation index NDVI
CN115372282B (en) Farmland soil water content monitoring method based on hyperspectral image of unmanned aerial vehicle
CN106779067A (en) Soil moisture method for reconstructing and system based on multi- source Remote Sensing Data data
CN110866364A (en) Ground surface temperature downscaling method based on machine learning
CN102354328A (en) Leaf area index (LAI) product inversion method and system for global earth surface
CN107941713A (en) A kind of rice yield estimation method based on coupling crop modeling assimilation spectral reflectivity
CN103576132A (en) Processing method and system for remote-sensing images
CN114819737B (en) Method, system and storage medium for estimating carbon reserves of highway road vegetation
CN110503137A (en) Based on the determination method of the remote sensing image temporal-spatial fusion base image pair of mixing together
Jiang et al. Synergistic use of optical and InSAR data for urban impervious surface mapping: a case study in Hong Kong
CN105842245A (en) Method for assessing rice yield
Weiss et al. Mapping leaf area index measurements at different scales for the validation of large swath satellite sensors: first results of the VALERI project
CN113743819B (en) Crop yield estimation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100825

Termination date: 20110309