CN103914817A - Multi-spectral and full-color image fusion method on basis of regional division and interpolation - Google Patents

Info

Publication number
CN103914817A
CN103914817A (application CN201410077181.8A)
Authority
CN
China
Prior art keywords
multispectral image
image
region
gray
low resolution
Prior art date
Legal status
Granted
Application number
CN201410077181.8A
Other languages
Chinese (zh)
Other versions
CN103914817B (en)
Inventor
刘芳
石程
李玲玲
焦李成
郝红侠
戚玉涛
武杰
段一平
马晶晶
尚荣华
于昕
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201410077181.8A priority Critical patent/CN103914817B/en
Publication of CN103914817A publication Critical patent/CN103914817A/en
Application granted granted Critical
Publication of CN103914817B publication Critical patent/CN103914817B/en
Status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a multi-spectral and full-color image fusion method based on regional division and interpolation. The method comprises the following steps: 1) extracting the primal sketch map of a high-resolution full-color image; 2) constructing a geometric template; 3) obtaining a region map that divides the high-resolution full-color image into a structural region and a non-structural region; 4) obtaining a region map that divides the image into a structural region, a texture region and a smooth region; 5) obtaining an initial multi-spectral image of the same size as the high-resolution full-color image; 6) dividing the initial multi-spectral image into a structural region, a texture region and a smooth region; 7) obtaining a predicted multi-spectral image; 8) completing the calculation of the grey value of each pixel of the predicted multi-spectral image to obtain the fused image. The fusion method overcomes the defect that the spectral information of the fused image is distorted or its spatial resolution is insufficient, and improves the quality of the fused image.

Description

Multispectral and panchromatic image fusion method based on region division and interpolation
Technical field
The invention belongs to the field of image processing, and specifically relates to a multispectral and panchromatic image fusion method based on region division and interpolation, which can be used in weather monitoring, military target recognition, environmental monitoring, urban planning, disaster prevention and mitigation, and many other fields.
Background technology
With the development of remote sensing technology, more and more multi-source remote sensing data of the same area are acquired by the Earth-observing sensors of different satellites, providing abundant and valuable data for military target recognition, environmental monitoring and other applications. The main obstacle to applying remote sensing today is extracting richer, more useful and more reliable information from these broad data sources. This requires making full use of the redundancy between multi-source image data and fusing the images to reduce error and uncertainty and to improve recognition rate and accuracy. Multi-source remote sensing image fusion, especially the fusion of multispectral and panchromatic images, is considered a very important step in modern multi-source image processing and analysis.
At present, the multispectral and panchromatic image fusion methods in practical use fall into three classes: traditional spatial-transform fusion methods, fusion methods based on multi-scale transforms, and fusion methods based on dictionary learning, which many scholars are currently studying.
Traditional fusion methods based on spatial transforms mainly include the HIS transform, the PCA transform, the Gram-Schmidt transform and the Brovey transform. Because of their low computational complexity, these methods are often used in commercial software. They can effectively improve the spatial resolution of the fused image, but they introduce spectral distortion in the fusion process; although the adaptive HIS and P+XS methods improve the spectral distortion of the fused image to some extent, the result is still unsatisfactory.
Methods based on transform domains mainly include fusion methods based on the Laplacian pyramid, fusion methods based on the wavelet transform, and fusion methods based on multi-scale geometric analysis such as Contourlet, Bandlet and Shearlet. These methods handle the spectral distortion produced by spatial transforms better, but their spatial resolution is limited by the adopted transform: the wavelet transform can only decompose an image into three directions, and although the Contourlet transform and its variants decompose an image into more directions than the wavelet transform, the number of directions is still limited. Faced with the varied edges and lines of an image, a limited number of directions can hardly achieve an optimal approximation, which degrades the detail information of the fused image.
The third class is fusion methods based on dictionary learning. These methods were proposed to overcome the difficulty that fixed bases, such as the wavelet and Contourlet bases, can hardly achieve the best approximation of an image. Dictionary learning learns a dictionary from the content of the image in order to obtain the optimal, sparsest representation of the image; such methods can improve the detail information of the fused image while preserving its spectral quality. However, the dictionary training requires high time complexity, which makes it difficult to meet practical requirements.
Summary of the invention
The object of the invention is to overcome the shortcoming of the prior art that spectral information and spatial resolution are difficult to balance when fusing a low-resolution multispectral image with a high-resolution panchromatic image. A multispectral and panchromatic image fusion method based on region division and interpolation is proposed to solve the problem of spectral distortion or insufficient spatial resolution in the fused image and to improve the quality of the fused image.
To this end, the invention provides a multispectral and panchromatic image fusion method based on region division and interpolation, comprising the following steps:
Step 1: extract the primal sketch map of the high-resolution panchromatic image;
Step 2: according to the direction of each line segment of the primal sketch map obtained in step 1, design a 7 × 7 directional window along the direction of the segment, centred on the points of the segment, to construct the geometric template;
Step 3: apply the constructed geometric template to the primal sketch map obtained in step 1 to obtain a region map that divides the high-resolution panchromatic image into a structural region and a non-structural region;
Step 4: according to the variance statistics of the image, divide the non-structural region obtained in step 3 into a texture region and a smooth region, thereby obtaining a region map of the structural region, the texture region and the smooth region;
Step 5: apply nearest-neighbour interpolation to the low-resolution multispectral image to obtain an initial multispectral image of the same size as the high-resolution panchromatic image, computed as:
f_HR(2i+m, 2j+l) = f_LR(i, j),  i = 1, ..., W, j = 1, ..., H, m = 0, 1, l = 0, 1   (1)
where W is the width of the low-resolution multispectral image, H is its height, f_LR denotes the low-resolution multispectral image and f_HR denotes the initial multispectral image;
Step 6: according to the region map obtained in step 4, divide the initial multispectral image into a structural region, a texture region and a smooth region;
Step 7: for each sub-band image of the initial multispectral image obtained in step 5, adjust the grey values at positions (2i+1, 2j+1): for the structural, texture and smooth regions, compute the grey value at (2i+1, 2j+1) with different interpolation and adjustment procedures to obtain the revised multispectral image, where i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image and H is its height;
Step 8: on the basis of the revised multispectral image obtained in step 7, for each of its sub-band images, adjust the grey values at positions (2i, 2j+1) and (2i+1, 2j): for the structural, texture and smooth regions, recompute the grey values at (2i, 2j+1) and (2i+1, 2j) with different interpolation and adjustment procedures, completing the calculation of the grey value of every pixel of the initial multispectral image and obtaining the newly revised multispectral image, where i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image and H is its height;
Step 9: if the real high-resolution panchromatic image is 2 times the size of the low-resolution multispectral image, the multispectral image obtained in step 8 is the fused image; if the real high-resolution panchromatic image is 4 times the size of the real low-resolution multispectral image, steps 1-8 are executed twice: in the first pass, the high-resolution panchromatic image of step 1 is the real high-resolution panchromatic image down-sampled by a factor of two; after the first pass, the multispectral image obtained in step 8 is used as the low-resolution multispectral image of the second pass, the high-resolution panchromatic image is the real high-resolution panchromatic image, and steps 1-8 are executed again to obtain the final fused image.
Obtaining the revised multispectral image described in step 7 above is carried out according to the following steps:
1) For each sub-band image of the initial multispectral image, adjust the pixels at positions (2i+1, 2j+1) of the initial multispectral image, computed as:
f_HR(2i+1, 2j+1) = Σ_{k=0}^{1} Σ_{l=0}^{1} α_{2k+l} f_HR(2(i+k), 2(j+l)) + δ(2i+1, 2j+1)   (2)
where f_HR(2(i+k), 2(j+l)) is the grey value of the initial multispectral image at position (2(i+k), 2(j+l)), α_{2k+l} are the learned interpolation weights, δ(2i+1, 2j+1) is the adjustment parameter guided by the high-resolution panchromatic image, i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image and H is its height;
2) Compute the weights α_{2k+l} of step 1): for each pixel at (2i+1, 2j+1) of the initial multispectral image that needs to be recalculated, judge whether the pixel lies in the smooth region of the region map; if so, the weights are set to fixed values and the calculation of the weights is complete; otherwise, compute the weight vector α = [α_0, α_1, α_2, α_3] with the NEDI interpolation method;
3) According to step 2), compute the predicted multispectral image as:
f̂_HR(2i+1, 2j+1) = Σ_{k=0}^{1} Σ_{l=0}^{1} α_{2k+l} f_HR(2(i+k), 2(j+l))   (3)
where f̂_HR(2i+1, 2j+1) is the grey value of the predicted multispectral image at position (2i+1, 2j+1), i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image and H is its height;
4) Compute the adjustment parameter δ(2i+1, 2j+1) of step 1). At the point (2i+1, 2j+1) of the predicted multispectral image obtained in step 3), compute the gradient functions f_x(δ) and f_y(δ) along the x and y directions. When the point (2i+1, 2j+1) lies in the structural region, the x direction is the direction of the line segment of the structural region at this point and the y direction is the direction perpendicular to the segment; when the point lies in the texture or smooth region, the x direction is the horizontal direction and the y direction is the vertical direction. The gradient functions f_x(δ) and f_y(δ) are computed as:
f_x(δ) = g_x1 + g_x2 - 2( f̂_HR(2i+1, 2j+1) + δ(2i+1, 2j+1) )   (4)
f_y(δ) = g_y1 + g_y2 - 2( f̂_HR(2i+1, 2j+1) + δ(2i+1, 2j+1) )   (5)
where f̂_HR(2i+1, 2j+1) is the grey value of the predicted multispectral image at position (2i+1, 2j+1), i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image, H is its height, g_x1 and g_x2 are the grey values of the two determined pixels nearest to (2i+1, 2j+1) along the x direction on either side of the point, and g_y1 and g_y2 are the grey values of the two determined pixels nearest to (2i+1, 2j+1) along the y direction on either side of the point;
5) Compute the gradient values f̂_x and f̂_y of the point (2i+1, 2j+1) on the high-resolution panchromatic image along the x and y directions as:
f̂_x = ĝ_x1 + ĝ_x2 - 2ĝ   (6)
f̂_y = ĝ_y1 + ĝ_y2 - 2ĝ   (7)
where ĝ is the grey value of the point (2i+1, 2j+1) on the high-resolution panchromatic image, and ĝ_x1, ĝ_x2, ĝ_y1 and ĝ_y2 are the grey values of the high-resolution panchromatic image at the same positions as g_x1, g_x2, g_y1 and g_y2;
6) According to steps 4) and 5), solve for the adjustment parameter δ(2i+1, 2j+1) from formula (8):
min_δ { ( f_x(δ) - λ_1 f̂_x )^2 + ( f_y(δ) - λ_2 f̂_y )^2 }   (8)
Taking the first derivative of formula (8) and computing its extreme point gives the adjustment parameter δ(2i+1, 2j+1). The intensity parameters λ_1 and λ_2 in formula (8) control the strength of the adjustment and are computed as in formula (9):
λ_1 = (g_x1 - g_x2) / (ĝ_x1 - ĝ_x2),  λ_2 = (g_y1 - g_y2) / (ĝ_y1 - ĝ_y2)   (9)
In formula (9), when ĝ_x1 = ĝ_x2, λ_1 = 1; when ĝ_y1 = ĝ_y2, λ_2 = 1;
7) Using the weight vector α obtained in step 2) and the adjustment parameter δ(2i+1, 2j+1) obtained in step 6), compute from formula (2) the revised grey value of the point located at (2i+1, 2j+1) in the initial multispectral image, and replace the grey value of that point of the initial multispectral image with the revised value to obtain the revised multispectral image.
Obtaining the newly revised multispectral image in step 8 above is carried out according to the following steps:
(1) For each sub-band image of the revised multispectral image, adjust the undetermined pixels at positions (2i, 2j+1) and (2i+1, 2j) of the revised multispectral image, computed as:
f_HR(2i, 2j+1) = Σ_{(k,l) ∈ {(-2,0),(0,-2),(0,2),(2,0)}} α_{k,l} f_HR(2i+k, 2j+1+l) + δ(2i, 2j+1)   (10)
f_HR(2i+1, 2j) = Σ_{(k,l) ∈ {(-2,0),(0,-2),(0,2),(2,0)}} α_{k,l} f_HR(2i+1+k, 2j+l) + δ(2i+1, 2j)   (11)
where i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image, H is its height, α_{k,l} are the interpolation weights and δ is the adjustment parameter guided by the high-resolution panchromatic image. The adjustment parameter, the weights and the intensity parameters λ_1 and λ_2 are computed as in step 7, giving the revised grey values of the points located at (2i, 2j+1) and (2i+1, 2j) in the revised multispectral image; these revised values replace the grey values of the points located at (2i, 2j+1) and (2i+1, 2j) in the revised multispectral image, completing the calculation of every grey value of the initial multispectral image and obtaining the newly revised multispectral image.
The advantages of the invention are:
1. The invention fuses the multispectral and panchromatic images by interpolation: the interpolation process preserves the spectral information of the image well, and the adjustment of the interpolated pixels under the guidance of the panchromatic image improves the spatial resolution of the fused image. This overcomes the difficulty of traditional algorithms in balancing spectral information and spatial resolution, and therefore improves the quality of the fused image.
2. By dividing the image into regions, the characteristics of the different regions and the directional information of the line segments provided by the primal sketch map are fully taken into account, and different interpolation calculations are proposed for the different regions. This overcomes the fact that traditional algorithms do not consider the differences between image regions during fusion, so that spectral information and spatial resolution are balanced in the fused image.
The invention is described in further detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is the flow chart of the multispectral and panchromatic image fusion of the invention;
Fig. 2(a) is a high-resolution panchromatic image;
Fig. 2(b) is the primal sketch map of Fig. 2(a);
Fig. 2(c) is the region map obtained with a geometric template of size 7 × 7;
Fig. 2(d) is the structural region extracted from Fig. 2(a);
Fig. 3(a) shows the geometric templates marked on the primal sketch map;
Fig. 3(b) is a schematic diagram of the geometric template designed for a line segment l;
Fig. 4 illustrates how the invention computes the adjustment parameter in the structural region (for a line segment of the primal sketch map); the left figure is a schematic of the predicted high-resolution multispectral image and the right figure is a schematic of the high-resolution panchromatic image;
Fig. 5(a) is a low-resolution multispectral image;
Fig. 5(b) is the high-resolution panchromatic image corresponding to Fig. 5(a);
Fig. 5(c) is a low-resolution multispectral image;
Fig. 5(d) is the high-resolution panchromatic image corresponding to Fig. 5(c);
Fig. 6(a) is the traditional HIS fusion result for the first group;
Fig. 6(b) is the adaptive HIS fusion result for the first group;
Fig. 6(c) is the PCA fusion result for the first group;
Fig. 6(d) is the SparseFI fusion result for the first group;
Fig. 6(e) is the P+XS fusion result for the first group;
Fig. 6(f) is the fusion result of the invention for the first group;
Fig. 7(a) is the traditional HIS fusion result for the second group;
Fig. 7(b) is the adaptive HIS fusion result for the second group;
Fig. 7(c) is the PCA fusion result for the second group;
Fig. 7(d) is the SparseFI fusion result for the second group;
Fig. 7(e) is the P+XS fusion result for the second group;
Fig. 7(f) is the fusion result of the invention for the second group.
Embodiment
As shown in Fig. 1, the steps of the multispectral and panchromatic image fusion method based on region division and interpolation provided by the invention are as follows:
Step 1: extract the primal sketch map of the high-resolution panchromatic image. For the extraction of the primal sketch map, see the article "Primal Sketch: Integrating Texture and Structure" published by Cheng-en Guo et al. in Computer Vision and Image Understanding in 2007.
As shown in the figures, Fig. 2(a) is the original high-resolution panchromatic image and Fig. 2(b) is the extracted primal sketch map. This map differs from the output of a traditional edge detector: every line segment of the primal sketch map carries directional information, and together the segments cover the edge and line structures of the image.
Step 2: according to the direction of each line segment of the primal sketch map of step 1, construct a 7 × 7 geometric template centred on the points of the segment and aligned with the direction of the segment, as shown in Fig. 3(b).
Because each edge or line structure in the primal sketch map is described by a one-pixel-wide segment, while an actual image edge has a certain continuity and is generally 3 to 5 pixels wide, a geometric template is designed to describe the structural features of the image. Fig. 3(a) shows the designed geometric templates drawn on the primal sketch map, and Fig. 3(b) is a schematic diagram of the geometric template, where l denotes the direction of the line segment in the primal sketch map. A sketch of this construction is given below.
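The following fragment is a minimal illustration (not taken from the patent) of how such a 7 × 7 oriented template could be rasterized. The function name geometric_template and the strip half-width of 2 pixels are assumptions; the half-width is chosen only because the description says an edge is generally 3 to 5 pixels wide, while the patent itself specifies just a 7 × 7 window aligned with the segment direction.

```python
import numpy as np

def geometric_template(direction_deg, size=7, half_width=2):
    """Rasterize a size x size binary template covering a strip of pixels
    centred on a primal-sketch line segment with the given orientation;
    half_width is the assumed number of pixels kept on each side of the line."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.deg2rad(direction_deg)
    # perpendicular distance of each cell from a line through the window centre
    dist = np.abs(-np.sin(theta) * xs + np.cos(theta) * ys)
    return (dist <= half_width).astype(np.uint8)

# Example: template for a segment oriented at 45 degrees
print(geometric_template(45))
```

Stamping such a template at every point of a segment would mark the pixels belonging to the structural region used in step 3.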
Step 3: apply the geometric template designed in step 2 to every point on the line segments of the primal sketch map, thereby obtaining the structural region map of the high-resolution panchromatic image, as shown in Fig. 2(c); Fig. 2(d) maps the structural region of the region map back onto the original image, with the other regions shown in black.
Step 4: according to the variance statistics of the image, divide the region outside the structural region of step 3 into a texture region and a smooth region, thereby obtaining a region map that divides the high-resolution panchromatic image into a structural region, a texture region and a smooth region.
In an image, the variance of a smooth region is small, while the variance of texture and structural regions is large. Therefore, by computing variance statistics over the image and segmenting them with a threshold T, the region outside the structural region of step 3 can be divided into a texture region and a smooth region. The threshold T adopted by the invention equals 15. A minimal sketch of this division is given below.
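As a rough sketch of this step (again not code from the patent), the local variance can be computed with a sliding window and thresholded at T = 15. The window size of 7, the function name and the boolean mask interface are assumptions, since the patent does not state how the variance statistics are gathered.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def split_texture_smooth(pan, non_struct_mask, win=7, T=15.0):
    """Split the non-structural region of the panchromatic image into a
    texture region (local variance > T) and a smooth region (variance <= T).
    non_struct_mask is a boolean array marking the non-structural pixels."""
    pan = pan.astype(np.float64)
    mean = uniform_filter(pan, win)
    # local variance via E[x^2] - E[x]^2, clipped against float round-off
    var = np.maximum(uniform_filter(pan * pan, win) - mean ** 2, 0.0)
    texture = non_struct_mask & (var > T)
    smooth = non_struct_mask & (var <= T)
    return texture, smooth
```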
Step 5: apply nearest-neighbour interpolation to the low-resolution multispectral image to obtain the initial multispectral image, which has the same size as the high-resolution panchromatic image.
The nearest-neighbour interpolation is:
f_HR(2i+x, 2j+y) = f_LR(i, j),  i = 1, ..., W, j = 1, ..., H, x = 0, 1, y = 0, 1   (1)
where f_HR is the initial multispectral image, f_LR is the low-resolution multispectral image, and W and H are the width and height of the low-resolution multispectral image, respectively.
The pixels f_HR(2i, 2j) are called determined pixels; their grey values no longer change in the following steps. The remaining pixels are called undetermined pixels and are re-learned and adjusted in the following steps, as sketched below.
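A minimal NumPy sketch of formula (1) and of the determined-pixel bookkeeping is given here for illustration only; it uses 0-based array indices instead of the 1-based indices of the formula, and the function name is an assumption.

```python
import numpy as np

def nearest_neighbor_init(ms_lr):
    """Formula (1): f_HR(2i+x, 2j+y) = f_LR(i, j) for x, y in {0, 1}.
    Returns the initial multispectral band and a boolean mask of the
    determined pixels (2i, 2j), whose grey values stay fixed later on."""
    H, W = ms_lr.shape                                   # height, width of the LR band
    ms_hr = np.repeat(np.repeat(ms_lr, 2, axis=0), 2, axis=1)
    determined = np.zeros((2 * H, 2 * W), dtype=bool)
    determined[::2, ::2] = True                          # positions (2i, 2j)
    return ms_hr, determined
```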
Step 6: using the region map obtained in step 4, divide the initial multispectral image of step 5 into a structural region, a texture region and a smooth region.
Step 7: for each sub-band image of the initial multispectral image, adjust the undetermined pixels at positions (2i+1, 2j+1) of the initial multispectral image, computed as:
f_HR(2i+1, 2j+1) = Σ_{k=0}^{1} Σ_{l=0}^{1} α_{2k+l} f_HR(2(i+k), 2(j+l)) + δ(2i+1, 2j+1)   (2)
where α_{2k+l} are the interpolation weights and δ(2i+1, 2j+1) is the adjustment parameter guided by the high-resolution panchromatic image. The weights and the adjustment parameter are computed as follows:
1) Determine the weight vector α = [α_0, α_1, α_2, α_3]. The concrete calculation steps are as follows:
a) For each undetermined pixel of the initial multispectral image that needs to be readjusted, judge whether the pixel lies in the smooth region of the region map of step 6; if so, the weights are set to fixed values and step c) is executed; otherwise, step b) is executed.
b) When the pixel of step a) lies in the structural or texture region, the invention computes the weights with the NEDI interpolation method. The concrete operation is: take a rectangular window of size w × h centred on this pixel; for each determined pixel y_i in the window, arrange the grey values of its four surrounding determined neighbours in a row vector c_i; stack all y_i into a column vector Y = [y_1, ..., y_i, ..., y_m] and use each c_i as one row of the matrix C = [c_1; ...; c_i; ...; c_m], where m is the number of determined pixels in the rectangular window. From the matrices Y and C, the weights α_{2k+l} are computed as:
α = (CᵀC)⁻¹(CᵀY)
where α is the column vector formed by the α_{2k+l}; the invention takes w = 15, h = 15.
For the NEDI method, see the article "New edge-directed interpolation" published by X. Li et al. in IEEE Transactions on Image Processing in 2001. A least-squares sketch of this computation is given after sub-step c) below.
c) From the computed weights α, obtain the predicted multispectral image, computed as:
f̂_HR(2i+1, 2j+1) = Σ_{k=0}^{1} Σ_{l=0}^{1} α_{2k+l} f_HR(2(i+k), 2(j+l))   (3)
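The least-squares solution α = (CᵀC)⁻¹(CᵀY) of sub-step b) can be sketched as follows. This is an illustrative reading, not code from the patent: the choice of the four diagonal neighbours at offsets (±2, ±2) follows the usual NEDI layout for this interpolation pass and is an assumption, as are the function name and the absence of any boundary or degenerate-window handling.

```python
import numpy as np

def nedi_weights(ms_hr, determined, r, c, win=15):
    """Estimate the four interpolation weights alpha for the undetermined
    pixel at (r, c) from the determined pixels inside a win x win window,
    solving the least-squares problem alpha = (C^T C)^(-1) C^T Y."""
    half = win // 2
    C_rows, Y_vals = [], []
    for y in range(max(r - half, 2), min(r + half + 1, ms_hr.shape[0] - 2)):
        for x in range(max(c - half, 2), min(c + half + 1, ms_hr.shape[1] - 2)):
            if not determined[y, x]:
                continue
            # grey values of the four diagonal determined neighbours of (y, x)
            C_rows.append([ms_hr[y - 2, x - 2], ms_hr[y - 2, x + 2],
                           ms_hr[y + 2, x - 2], ms_hr[y + 2, x + 2]])
            Y_vals.append(ms_hr[y, x])
    C = np.asarray(C_rows, dtype=np.float64)
    Y = np.asarray(Y_vals, dtype=np.float64)
    alpha, *_ = np.linalg.lstsq(C, Y, rcond=None)   # same solution as (C^T C)^(-1) C^T Y
    return alpha
```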
2) Determine the adjustment parameter δ(2i+1, 2j+1). Since the high-resolution panchromatic image and the fused image share highly similar structural information, an adjustment parameter δ(2i+1, 2j+1) is added to the predicted value f̂_HR(2i+1, 2j+1) of the point (2i+1, 2j+1) on the predicted multispectral image obtained in step 1), so that its gradient values in the x and y directions differ as little as possible from the x- and y-direction gradient values of the corresponding point of the high-resolution panchromatic image. The adjustment parameter δ(2i+1, 2j+1) is obtained by solving this minimization problem; the adjustment process is shown in Fig. 4, and the concrete calculation steps are as follows:
(1) Compute the gradient functions f_x(δ) and f_y(δ) of the point (2i+1, 2j+1) of the predicted multispectral image along the x and y directions. When the point (2i+1, 2j+1) lies in the structural region of step 6, the x direction is the direction of the line segment of the structural region at this point and the y direction is the direction perpendicular to the segment; when the point lies in the texture or smooth region, the x direction is the horizontal direction and the y direction is the vertical direction. The gradient functions f_x(δ) and f_y(δ) are computed as:
f_x(δ) = g_x1 + g_x2 - 2( f̂_HR(2i+1, 2j+1) + δ(2i+1, 2j+1) )   (4)
f_y(δ) = g_y1 + g_y2 - 2( f̂_HR(2i+1, 2j+1) + δ(2i+1, 2j+1) )   (5)
where g_x1 and g_x2 are the grey values of the two determined pixels nearest to (2i+1, 2j+1) along the x direction on either side of the point, and g_y1 and g_y2 are the grey values of the two determined pixels nearest to (2i+1, 2j+1) along the y direction on either side of the point.
(2) Compute the gradient values f̂_x and f̂_y of the point (2i+1, 2j+1) on the high-resolution panchromatic image along the x and y directions as:
f̂_x = ĝ_x1 + ĝ_x2 - 2ĝ   (6)
f̂_y = ĝ_y1 + ĝ_y2 - 2ĝ   (7)
where ĝ is the grey value of the point (2i+1, 2j+1) on the high-resolution panchromatic image, and ĝ_x1, ĝ_x2, ĝ_y1 and ĝ_y2 are the grey values of the high-resolution panchromatic image at the same positions as g_x1, g_x2, g_y1 and g_y2.
(3) According to steps (1) and (2), compute the adjustment parameter δ(2i+1, 2j+1) from formula (8):
min_δ { ( f_x(δ) - λ_1 f̂_x )^2 + ( f_y(δ) - λ_2 f̂_y )^2 }   (8)
Formula (8) is solved by taking the first derivative and computing the extreme point. The intensity parameters λ_1 and λ_2 in formula (8) control the strength of the adjustment and are computed as in formula (9):
λ_1 = (g_x1 - g_x2) / (ĝ_x1 - ĝ_x2),  λ_2 = (g_y1 - g_y2) / (ĝ_y1 - ĝ_y2)   (9)
In formula (9), when ĝ_x1 = ĝ_x2, λ_1 = 1; when ĝ_y1 = ĝ_y2, λ_2 = 1. A closed-form sketch of this minimization is given below, after step 3).
3) Using the weights α_{2k+l} obtained in step 1) and the adjustment parameter δ(2i+1, 2j+1) obtained in step 2), compute from formula (2) the revised grey value of the point located at (2i+1, 2j+1) in the initial multispectral image, and replace the grey value of that point of the initial multispectral image with the revised value to obtain the revised multispectral image.
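Because the objective in (8) is quadratic in δ, setting its derivative to zero gives a closed form: with A = g_x1 + g_x2 - 2 f̂_HR - λ_1 f̂_x and B = g_y1 + g_y2 - 2 f̂_HR - λ_2 f̂_y, the minimizer is δ = (A + B) / 4. The sketch below implements this closed form for illustration only; the argument names are assumptions.

```python
def adjustment_delta(f_pred, gx1, gx2, gy1, gy2,
                     pan_fx, pan_fy, pgx1, pgx2, pgy1, pgy2, eps=1e-9):
    """Closed-form minimizer of (8): the objective equals
    (A - 2*delta)^2 + (B - 2*delta)^2, so d/d(delta) = 0 gives
    delta = (A + B) / 4."""
    # intensity parameters of formula (9), with the degenerate case set to 1
    lam1 = (gx1 - gx2) / (pgx1 - pgx2) if abs(pgx1 - pgx2) > eps else 1.0
    lam2 = (gy1 - gy2) / (pgy1 - pgy2) if abs(pgy1 - pgy2) > eps else 1.0
    A = gx1 + gx2 - 2.0 * f_pred - lam1 * pan_fx   # x-direction residual term
    B = gy1 + gy2 - 2.0 * f_pred - lam2 * pan_fy   # y-direction residual term
    return (A + B) / 4.0
```

Here f_pred stands for f̂_HR(2i+1, 2j+1), pan_fx and pan_fy for the panchromatic gradients f̂_x and f̂_y of formulas (6)-(7), and pgx1, ..., pgy2 for ĝ_x1, ..., ĝ_y2.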
Step 8: mark the pixels located at (2i+1, 2j+1) in step 7 as determined pixels. On the basis of the revised multispectral image obtained in step 7, for each of its sub-band images, adjust the undetermined pixels at positions (2i, 2j+1) and (2i+1, 2j) of the revised multispectral image, computed as:
f_HR(2i, 2j+1) = Σ_{(k,l) ∈ {(-2,0),(0,-2),(0,2),(2,0)}} α_{k,l} f_HR(2i+k, 2j+1+l) + δ(2i, 2j+1)   (10)
f_HR(2i+1, 2j) = Σ_{(k,l) ∈ {(-2,0),(0,-2),(0,2),(2,0)}} α_{k,l} f_HR(2i+1+k, 2j+l) + δ(2i+1, 2j)   (11)
where i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image, H is its height, α_{k,l} are the interpolation weights and δ is the adjustment parameter guided by the high-resolution panchromatic image. The weights and the adjustment parameter are computed as in step 7, giving the revised grey values of the points located at (2i, 2j+1) and (2i+1, 2j) in the revised multispectral image; these revised values replace the grey values of the points located at (2i, 2j+1) and (2i+1, 2j) in the revised multispectral image, completing the calculation of every grey value of the initial multispectral image and obtaining the newly revised multispectral image.
Step 9: if the size of the high-resolution panchromatic image is 2 times that of the low-resolution multispectral image, the image obtained in step 8 is the fused image. If it is 4 times, steps 1-8 are executed twice: in the first pass, the high-resolution panchromatic image of step 1 is the real high-resolution panchromatic image down-sampled by a factor of two; after the first pass, the image obtained in step 8 is used as the low-resolution multispectral image of the second pass, the high-resolution panchromatic image is the real high-resolution panchromatic image, and steps 1-8 are executed again to obtain the final fused image. A minimal driver for this two-pass procedure is sketched below.
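For illustration, the two-resolution logic of step 9 can be written as a small driver. fuse_once is a hypothetical function standing for steps 1-8, and down-sampling by simple decimation is an assumption (the patent only says the panchromatic image is reduced by a factor of two).

```python
def fuse(ms_lr, pan, fuse_once):
    """Step 9 driver: one pass when the panchromatic image is 2x the
    multispectral size, two passes when it is 4x."""
    ratio = pan.shape[0] // ms_lr.shape[0]
    if ratio == 2:
        return fuse_once(ms_lr, pan)
    if ratio == 4:
        pan_half = pan[::2, ::2]              # panchromatic image reduced by a factor of two
        ms_mid = fuse_once(ms_lr, pan_half)   # first pass: steps 1-8 at half resolution
        return fuse_once(ms_mid, pan)         # second pass: steps 1-8 at full resolution
    raise ValueError("the resolution ratio must be 2 or 4")
```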
The advantages of the invention are further illustrated by the conditions, content and results of the following simulation.
1. Simulation conditions
The simulation uses two groups of QuickBird satellite images, in which the resolution of the high-resolution panchromatic image is 0.6 m and the resolution of the low-resolution multispectral image is 2.4 m; the two groups of simulated images are shown in Fig. 5(a)-5(b) and Fig. 5(c)-5(d). In the simulation, the size of the geometric template is 7 × 7.
2. Simulation content and results
The simulation results are shown in Fig. 6(a)-6(f) and Fig. 7(a)-7(f); the lower-right corner of each image in Fig. 6 and Fig. 7 is a partial enlargement of the experimental result. In the test, the simulation results of traditional HIS, adaptive HIS, PCA, SparseFI, P+XS and the invention are compared. For the adaptive HIS method, see the article "An Adaptive HIS Pan-sharpening Method" published by S. Rahmani et al. in IEEE Geoscience and Remote Sensing Letters in 2010; for the SparseFI method, see the article "A Sparse Image Fusion Algorithm with Application to Pan-Sharpening" published by X. X. Zhu et al. in IEEE Transactions on Geoscience and Remote Sensing in 2013; for the P+XS method, see the article "A variational model for P+XS image fusion" published by C. Ballester et al. in the International Journal of Computer Vision in 2006.
Compared with the other fusion methods, the fused image of the invention not only retains good spectral information but also has higher spatial resolution.
The above examples are only illustrations of the invention and do not limit its scope of protection; every design identical or similar to that of the invention falls within the scope of protection of the invention.

Claims (3)

1. A multispectral and panchromatic image fusion method based on region division and interpolation, characterized by comprising the following steps:
Step 1: extract the primal sketch map of the high-resolution panchromatic image;
Step 2: according to the direction of each line segment of the primal sketch map obtained in step 1, design a 7 × 7 directional window along the direction of the segment, centred on the points of the segment, to construct the geometric template;
Step 3: apply the constructed geometric template to the primal sketch map obtained in step 1 to obtain a region map that divides the high-resolution panchromatic image into a structural region and a non-structural region;
Step 4: according to the variance statistics of the image, divide the non-structural region obtained in step 3 into a texture region and a smooth region, thereby obtaining a region map of the structural region, the texture region and the smooth region;
Step 5: apply nearest-neighbour interpolation to the low-resolution multispectral image to obtain an initial multispectral image of the same size as the high-resolution panchromatic image, computed as:
f_HR(2i+m, 2j+l) = f_LR(i, j),  i = 1, ..., W, j = 1, ..., H, m = 0, 1, l = 0, 1   (1)
where W is the width of the low-resolution multispectral image, H is its height, f_LR denotes the low-resolution multispectral image and f_HR denotes the initial multispectral image;
Step 6: according to the region map obtained in step 4, divide the initial multispectral image into a structural region, a texture region and a smooth region;
Step 7: for each sub-band image of the initial multispectral image obtained in step 5, adjust the grey values at positions (2i+1, 2j+1): for the structural, texture and smooth regions, compute the grey value at (2i+1, 2j+1) with different interpolation and adjustment procedures to obtain the revised multispectral image, where i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image and H is its height;
Step 8: on the basis of the revised multispectral image obtained in step 7, for each of its sub-band images, adjust the grey values at positions (2i, 2j+1) and (2i+1, 2j): for the structural, texture and smooth regions, recompute the grey values at (2i, 2j+1) and (2i+1, 2j) with different interpolation and adjustment procedures, completing the calculation of the grey value of every pixel of the initial multispectral image and obtaining the newly revised multispectral image, where i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image and H is its height;
Step 9: if the real high-resolution panchromatic image is 2 times the size of the low-resolution multispectral image, the multispectral image obtained in step 8 is the fused image; if the real high-resolution panchromatic image is 4 times the size of the real low-resolution multispectral image, steps 1-8 are executed twice: in the first pass, the high-resolution panchromatic image of step 1 is the real high-resolution panchromatic image down-sampled by a factor of two; after the first pass, the multispectral image obtained in step 8 is used as the low-resolution multispectral image of the second pass, the high-resolution panchromatic image is the real high-resolution panchromatic image, and steps 1-8 are executed again to obtain the final fused image.
2. The multispectral and panchromatic image fusion method based on region division and interpolation according to claim 1, characterized in that obtaining the revised multispectral image described in step 7 is carried out according to the following steps:
1) For each sub-band image of the initial multispectral image, adjust the pixels at positions (2i+1, 2j+1) of the initial multispectral image, computed as:
f_HR(2i+1, 2j+1) = Σ_{k=0}^{1} Σ_{l=0}^{1} α_{2k+l} f_HR(2(i+k), 2(j+l)) + δ(2i+1, 2j+1)   (2)
where f_HR(2(i+k), 2(j+l)) is the grey value of the initial multispectral image at position (2(i+k), 2(j+l)), α_{2k+l} are the learned interpolation weights, δ(2i+1, 2j+1) is the adjustment parameter guided by the high-resolution panchromatic image, i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image and H is its height;
2) Compute the weights α_{2k+l} of step 1): for each pixel at (2i+1, 2j+1) of the initial multispectral image that needs to be recalculated, judge whether the pixel lies in the smooth region of the region map; if so, the weights are set to fixed values and the calculation of the weights is complete; otherwise, compute the weight vector α = [α_0, α_1, α_2, α_3] with the NEDI interpolation method;
3) According to step 2), compute the predicted multispectral image as:
f̂_HR(2i+1, 2j+1) = Σ_{k=0}^{1} Σ_{l=0}^{1} α_{2k+l} f_HR(2(i+k), 2(j+l))   (3)
where f̂_HR(2i+1, 2j+1) is the grey value of the predicted multispectral image at position (2i+1, 2j+1), i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image and H is its height;
4) Compute the adjustment parameter δ(2i+1, 2j+1) of step 1). At the point (2i+1, 2j+1) of the predicted multispectral image obtained in step 3), compute the gradient functions f_x(δ) and f_y(δ) along the x and y directions. When the point (2i+1, 2j+1) lies in the structural region, the x direction is the direction of the line segment of the structural region at this point and the y direction is the direction perpendicular to the segment; when the point lies in the texture or smooth region, the x direction is the horizontal direction and the y direction is the vertical direction. The gradient functions f_x(δ) and f_y(δ) are computed as:
f_x(δ) = g_x1 + g_x2 - 2( f̂_HR(2i+1, 2j+1) + δ(2i+1, 2j+1) )   (4)
f_y(δ) = g_y1 + g_y2 - 2( f̂_HR(2i+1, 2j+1) + δ(2i+1, 2j+1) )   (5)
where f̂_HR(2i+1, 2j+1) is the grey value of the predicted multispectral image at position (2i+1, 2j+1), i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image, H is its height, g_x1 and g_x2 are the grey values of the two determined pixels nearest to (2i+1, 2j+1) along the x direction on either side of the point, and g_y1 and g_y2 are the grey values of the two determined pixels nearest to (2i+1, 2j+1) along the y direction on either side of the point;
5) Compute the gradient values f̂_x and f̂_y of the point (2i+1, 2j+1) on the high-resolution panchromatic image along the x and y directions as:
f̂_x = ĝ_x1 + ĝ_x2 - 2ĝ   (6)
f̂_y = ĝ_y1 + ĝ_y2 - 2ĝ   (7)
where ĝ is the grey value of the point (2i+1, 2j+1) on the high-resolution panchromatic image, and ĝ_x1, ĝ_x2, ĝ_y1 and ĝ_y2 are the grey values of the high-resolution panchromatic image at the same positions as g_x1, g_x2, g_y1 and g_y2;
6) According to steps 4) and 5), solve for the adjustment parameter δ(2i+1, 2j+1) from formula (8):
min_δ { ( f_x(δ) - λ_1 f̂_x )^2 + ( f_y(δ) - λ_2 f̂_y )^2 }   (8)
Taking the first derivative of formula (8) and computing its extreme point gives the adjustment parameter δ(2i+1, 2j+1). The intensity parameters λ_1 and λ_2 in formula (8) control the strength of the adjustment and are computed as in formula (9):
λ_1 = (g_x1 - g_x2) / (ĝ_x1 - ĝ_x2),  λ_2 = (g_y1 - g_y2) / (ĝ_y1 - ĝ_y2)   (9)
In formula (9), when ĝ_x1 = ĝ_x2, λ_1 = 1; when ĝ_y1 = ĝ_y2, λ_2 = 1;
7) Using the weight vector α obtained in step 2) and the adjustment parameter δ(2i+1, 2j+1) obtained in step 6), compute from formula (2) the revised grey value of the point located at (2i+1, 2j+1) in the initial multispectral image, and replace the grey value of that point of the initial multispectral image with the revised value to obtain the revised multispectral image.
3. The multispectral and panchromatic image fusion method based on region division and interpolation according to claim 1, characterized in that obtaining the newly revised multispectral image in step 8 is carried out according to the following steps:
(1) For each sub-band image of the revised multispectral image, adjust the undetermined pixels at positions (2i, 2j+1) and (2i+1, 2j) of the revised multispectral image, computed as:
f_HR(2i, 2j+1) = Σ_{(k,l) ∈ {(-2,0),(0,-2),(0,2),(2,0)}} α_{k,l} f_HR(2i+k, 2j+1+l) + δ(2i, 2j+1)   (10)
f_HR(2i+1, 2j) = Σ_{(k,l) ∈ {(-2,0),(0,-2),(0,2),(2,0)}} α_{k,l} f_HR(2i+1+k, 2j+l) + δ(2i+1, 2j)   (11)
where i = 1, ..., W, j = 1, ..., H, W is the width of the low-resolution multispectral image, H is its height, α_{k,l} are the interpolation weights and δ is the adjustment parameter guided by the high-resolution panchromatic image. The adjustment parameter, the weights and the intensity parameters λ_1 and λ_2 are computed as in claim 2, giving the revised grey values of the points located at (2i, 2j+1) and (2i+1, 2j) in the revised multispectral image; these revised values replace the grey values of the points located at (2i, 2j+1) and (2i+1, 2j) in the revised multispectral image, completing the calculation of every grey value of the initial multispectral image and obtaining the newly revised multispectral image.
CN201410077181.8A 2014-03-04 2014-03-04 Multispectral and panchromatic image fusion method based on region division and interpolation Active CN103914817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410077181.8A CN103914817B (en) 2014-03-04 2014-03-04 Multispectral and panchromatic image fusion method based on region division and interpolation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410077181.8A CN103914817B (en) 2014-03-04 2014-03-04 Multispectral and panchromatic image fusion method based on region division and interpolation

Publications (2)

Publication Number Publication Date
CN103914817A true CN103914817A (en) 2014-07-09
CN103914817B CN103914817B (en) 2017-01-04

Family

ID=51040474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410077181.8A Active CN103914817B (en) 2014-03-04 2014-03-04 Multispectral and panchromatic image fusion method based on region division and interpolation

Country Status (1)

Country Link
CN (1) CN103914817B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794681A (en) * 2015-04-28 2015-07-22 西安电子科技大学 Remote sensing image fusion method based on multi-redundancy dictionary and sparse reconstruction
CN104867124A (en) * 2015-06-02 2015-08-26 西安电子科技大学 Multispectral image and full-color image fusion method based on dual sparse non-negative matrix factorization
CN107392208A (en) * 2017-05-23 2017-11-24 三亚中科遥感研究所 Object Spectra feature extracting method with purifying is mapped based on spectral space
CN107710276A (en) * 2015-09-30 2018-02-16 高途乐公司 The unified image processing of the combination image in the region based on spatially co-located

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102117482A (en) * 2011-04-13 2011-07-06 西安电子科技大学 Non-local mean image denoising method combined with structure information
US20110235939A1 (en) * 2010-03-23 2011-09-29 Raytheon Company System and Method for Enhancing Registered Images Using Edge Overlays
CN103093433A (en) * 2013-01-25 2013-05-08 西安电子科技大学 Natural image denoising method based on regionalism and dictionary learning
CN103198463A (en) * 2013-04-07 2013-07-10 北京航空航天大学 Spectrum image panchromatic sharpening method based on fusion of whole structure and space detail information

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110235939A1 (en) * 2010-03-23 2011-09-29 Raytheon Company System and Method for Enhancing Registered Images Using Edge Overlays
CN102117482A (en) * 2011-04-13 2011-07-06 西安电子科技大学 Non-local mean image denoising method combined with structure information
CN103093433A (en) * 2013-01-25 2013-05-08 西安电子科技大学 Natural image denoising method based on regionalism and dictionary learning
CN103198463A (en) * 2013-04-07 2013-07-10 北京航空航天大学 Spectrum image panchromatic sharpening method based on fusion of whole structure and space detail information

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794681A (en) * 2015-04-28 2015-07-22 西安电子科技大学 Remote sensing image fusion method based on multi-redundancy dictionary and sparse reconstruction
CN104794681B (en) * 2015-04-28 2018-03-13 西安电子科技大学 Remote sensing image fusion method based on more redundant dictionaries and sparse reconstruct
CN104867124A (en) * 2015-06-02 2015-08-26 西安电子科技大学 Multispectral image and full-color image fusion method based on dual sparse non-negative matrix factorization
CN104867124B (en) * 2015-06-02 2017-10-24 西安电子科技大学 Multispectral and panchromatic image fusion method based on the sparse Non-negative Matrix Factorization of antithesis
CN107710276A (en) * 2015-09-30 2018-02-16 高途乐公司 The unified image processing of the combination image in the region based on spatially co-located
CN107710276B (en) * 2015-09-30 2022-05-13 高途乐公司 Method, system, and non-transitory computer readable medium for image processing
CN107392208A (en) * 2017-05-23 2017-11-24 三亚中科遥感研究所 Object Spectra feature extracting method with purifying is mapped based on spectral space
CN107392208B (en) * 2017-05-23 2020-05-22 三亚中科遥感研究所 Object spectral feature extraction method based on spectral space mapping and purification

Also Published As

Publication number Publication date
CN103914817B (en) 2017-01-04

Similar Documents

Publication Publication Date Title
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN105931295B (en) A kind of geologic map Extracting Thematic Information method
CN103400151B (en) The optical remote sensing image of integration and GIS autoregistration and Clean water withdraw method
CN101615290B (en) Face image super-resolution reconstructing method based on canonical correlation analysis
CN106127204A (en) A kind of multi-direction meter reading Region detection algorithms of full convolutional neural networks
CN103258324B (en) Based on the method for detecting change of remote sensing image that controlled kernel regression and super-pixel are split
CN109871823B (en) Satellite image ship detection method combining rotating frame and context information
Zheng et al. Large-scale oil palm tree detection from high-resolution remote sensing images using faster-rcnn
CN102629378B (en) Remote sensing image change detection method based on multi-feature fusion
Chen et al. Convolutional neural network based dem super resolution
CN103020939B (en) Method for removing large-area thick clouds for optical remote sensing images through multi-temporal data
CN106952288A (en) Based on convolution feature and global search detect it is long when block robust tracking method
CN105869178A (en) Method for unsupervised segmentation of complex targets from dynamic scene based on multi-scale combination feature convex optimization
CN105139395A (en) SAR image segmentation method based on wavelet pooling convolutional neural networks
CN104102904B (en) A kind of static gesture identification method
CN103559506B (en) Sub-pixel drawing method based on vector boundaries
CN104361590A (en) High-resolution remote sensing image registration method with control points distributed in adaptive manner
CN102831598A (en) Remote sensing image change detecting method with combination of multi-resolution NMF (non-negative matrix factorization) and Treelet
CN104794681B (en) Remote sensing image fusion method based on more redundant dictionaries and sparse reconstruct
CN103914817B (en) A kind of based on region division and the multispectral and panchromatic image fusion method of interpolation
CN103971338A (en) Variable-block image repair method based on saliency map
CN103871039A (en) Generation method for difference chart in SAR (Synthetic Aperture Radar) image change detection
US11238307B1 (en) System for performing change detection within a 3D geospatial model based upon semantic change detection using deep learning and related methods
US20220091259A1 (en) System using a priori terrain height data for interferometric synthetic aperture radar (ifsar) phase disambiguation and related methods
CN104751111A (en) Method and system for recognizing human action in video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant