CN103824263A - Remote sensing image sparseness estimation method based on mixing transformation - Google Patents


Info

Publication number: CN103824263A (application); CN103824263B (grant)
Authority: CN (China)
Prior art keywords: image, transformation, remote sensing, frequency
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201410074775.3A
Other languages: Chinese (zh)
Other versions: CN103824263B (en)
Inventors
Inventor
石翠萍
张钧萍
张晔
陈浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hit Robot Group Co ltd
Original Assignee
Harbin Institute of Technology
Application filed by Harbin Institute of Technology
Priority to CN201410074775.3A
Publication of CN103824263A
Application granted
Publication of CN103824263B
Legal status: Active

Landscapes

  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a remote sensing image sparse estimation method based on hybrid transformation, belonging to the technical field of remote sensing image processing. It addresses the problem that existing sparse estimation methods mainly retain the bulk of the image energy, losing image detail information and making a good sparse effect difficult to obtain for detail-rich remote sensing images. The method combines the advantage of the tensor product wavelet transform in representing smooth images with the ability of the Tetrolet transform to effectively express detail information such as texture and edges; it places no restriction on image characteristics and therefore has a degree of universality. Experimental results show that, compared with sparse estimation of a remote sensing image by a single transform, the hybrid-transformation method represents the remote sensing image sparsely more effectively. The method is especially suitable for sparse processing of remote sensing images.

Description

Remote sensing image sparse estimation method based on hybrid transformation
Technical Field
The invention relates to a remote sensing image sparse estimation method, and belongs to the technical field of remote sensing image processing.
Background
Most existing image sparse estimation methods achieve their best performance only when the image has a particular characteristic. For example, the tensor product wavelet transform gives an optimal sparse representation of smooth images but cannot describe geometric features in images well. Directional wavelets describe certain image details better: the directionlet represents crossing lines well, and the wedgelet effectively detects lines, curves, and the like. However, directional wavelets represent smooth images less well than the tensor product wavelet transform. Therefore, to represent an image sparsely and efficiently, its characteristics must first be determined. Yet whether an image is smooth or detailed is hard to judge, since these are relative concepts, so an empirically chosen sparse estimation method may be unsuitable.
In addition, if a sparse method designed for ordinary images is applied to remote sensing images, good sparse performance is hard to obtain, because remote sensing images have characteristics that ordinary natural images lack. A remote sensing image usually contains a large amount of ground-feature information, so its detail information is generally rich, including geometric information, edges, textures, and even the outlines of small targets. Most existing sparse methods aim to retain most of the image energy, so image detail information is lost and a good sparse effect is difficult to obtain for detail-rich remote sensing images; large errors then arise when the sparsified remote sensing images are used in applications such as small-target recognition and feature extraction.
Disclosure of Invention
The invention provides a remote sensing image sparse estimation method based on hybrid transformation, aiming to solve the problems that existing sparse methods retain only most of the image energy, easily lose image detail information, and obtain a poor sparse effect on detail-rich remote sensing images.
The technical scheme adopted by the invention for solving the technical problems is as follows:
the invention relates to a remote sensing image sparse estimation method based on hybrid transformation, which is realized according to the following steps:
step one, carrying out sparse estimation on the low frequency of a remote sensing image:
Step one (one): perform tensor product wavelet transformation on the original image. Step one (two): perform polyphase decomposition on each sub-band with a p-fold decimation filter. Step one (three): perform principal component transformation on the decomposed components. Step one (four): for the transformed image, retain the N1 largest transform coefficients and set the other coefficients to 0; this gives the sparse estimation result, which is then inverse transformed.
step two, carrying out sparse estimation on the high frequency of the remote sensing image:
Step two (one): subtract the low-frequency image obtained in step one from the original image to obtain a high-frequency image containing most textures and edges. Step two (two): transform the high-frequency image with the Tetrolet transform, and further aggregate energy in the transformed high-frequency sub-bands with the adaptive decomposition method. Step two (three): for the transformed image, retain the N2 largest transform coefficients and set the other coefficients to 0. Step two (four): following the adaptive decomposition process, apply the corresponding inverse transformation to the adaptively decomposed high-frequency sub-bands, then inverse transform the transformed image of the whole high-frequency image to obtain the sparse estimation result.
Step three: superpose the sparse estimation results of steps one and two to obtain the final estimated image.
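The three steps above can be sketched numerically. The following is a minimal illustration, not the patented implementation: the helper names (`keep_largest`, `hybrid_sparse_estimate`) and the use of NumPy are assumptions, and the two stage transforms are left abstract.

```python
import numpy as np

def keep_largest(coeffs, n):
    """Keep the n largest-magnitude coefficients, zero the rest
    (ties at the threshold are all kept)."""
    flat = coeffs.ravel().copy()
    if 0 < n < flat.size:
        thresh = np.partition(np.abs(flat), -n)[-n]
        flat[np.abs(flat) < thresh] = 0.0
    return flat.reshape(coeffs.shape)

def hybrid_sparse_estimate(X, low_stage, high_stage):
    """Two-stage pipeline: low_stage(X) returns the sparse low-frequency
    approximation (step one); high_stage(X - low) the sparse detail
    approximation (step two); the final estimate is their sum (step three)."""
    X_low = low_stage(X)
    X_detail = high_stage(X - X_low)
    return X_low + X_detail
```

The actual low and high stages of the patent (tensor product wavelet plus polyphase/PCA, and Tetrolet plus adaptive decomposition) would be plugged in as `low_stage` and `high_stage`.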
The invention has the beneficial effects that:
the method is specially suitable for sparse processing of the remote sensing image, so that the image detail information can be kept as much as possible on the premise that most energy of the image can be kept. The method of the invention provides a mixed transformation method based on Tetrolet transformation to sparsely remote sense images, combines the advantages of tensor product wavelet transformation to express smooth images and the characteristic that Tetrolet transformation can effectively express detailed information such as texture and edge, and has no limit to the self characteristics of the images and certain universality. Meanwhile, in order to improve the sparse effect on the remote sensing image, two decomposition processes are specially designed, so that the image detail information can be kept as much as possible while the energy aggregation is enhanced. Experimental results show that compared with the method for sparsely expressing the remote sensing image by adopting a single method, the method based on the hybrid transformation can more effectively sparsely express the remote sensing image.
With this method, 10-15% of the image detail information can be retained while 95% of the energy of the remote sensing image is preserved, so that when detail-rich remote sensing images are processed, detail information is kept better while most of the image energy is retained. Compared with processing remote sensing images by sparse methods designed for ordinary images, the method sparsifies the remote sensing image better under the same comparison conditions and has a degree of universality.
Drawings
FIG. 1 is a schematic of the general framework of the method of the invention. FIG. 2 shows PSNR values and partial enlarged views of reconstructed images obtained by different methods: (a) the original image; (b)-(c) the 9/7 wavelet result, PSNR = 25.09; (d)-(e) the Tetrolet transform result, PSNR = 25.49; (f)-(g) the proposed hybrid method, PSNR = 26.32; (h)-(i) the Contourlet transform result, PSNR = 24.18; (j)-(k) the Curvelet transform result, PSNR = 22.38. FIG. 3 shows two remote sensing images used for testing. FIG. 4 compares objective indices under the conditions N1:N2 = 4:1 and N1:N2 = 3:1: (a) RMSE, (b) MAE, (c) MAD. FIG. 5 illustrates the polyphase decomposition process of a wavelet sub-band.
Detailed Description
Based on the general framework of fig. 1, a given remote sensing image is sparsified in two stages: the first stage sparsifies the low frequency of the remote sensing image, and the second stage sparsifies the detail image. The two stages are embodied as follows.
The first embodiment is as follows: the remote sensing image sparse estimation method based on the hybrid transformation, which is described in the embodiment, is realized according to the following steps:
step one, carrying out sparse estimation on the low frequency of a remote sensing image:
Step one (one): perform tensor product wavelet transformation on the original image. Step one (two): perform polyphase decomposition on each sub-band with a p-fold decimation filter. Step one (three): perform principal component transformation on the decomposed components. Step one (four): for the transformed image, retain the N1 largest transform coefficients and set the other coefficients to 0; this gives the sparse estimation result, which is then inverse transformed.
step two, carrying out sparse estimation on the high frequency of the remote sensing image:
Step two (one): subtract the low-frequency image obtained in step one from the original image to obtain a high-frequency image containing most textures and edges. Step two (two): transform the high-frequency image with the Tetrolet transform, and further aggregate energy in the transformed high-frequency sub-bands with the adaptive decomposition method. Step two (three): for the transformed image, retain the N2 largest transform coefficients and set the other coefficients to 0. Step two (four): following the adaptive decomposition process, apply the corresponding inverse transformation to the adaptively decomposed high-frequency sub-bands, then inverse transform the transformed image of the whole high-frequency image to obtain the sparse estimation result.
Step three: superpose the sparse estimation results of steps one and two to obtain the final estimated image.
The second embodiment: this embodiment differs from the first embodiment in that the sparsification of the low-frequency part of the remote sensing image proceeds as follows. Let the original image be $X \in \mathbb{R}^{M_1 \times M_2}$.
The specific implementation of step one (one) is: let A denote the two-dimensional tensor product wavelet transform, and let $XA^T$ denote the result of applying the tensor product wavelet transform to X; the transformed image is then represented as

$$XA^T := \left[(XA^T)^{(1)}, (XA^T)^{(2)}, \ldots, (XA^T)^{(N)}\right]$$

where ':=' means 'defined as' and N is the number of wavelet sub-bands.
The specific implementation of step one (two) is: for each sub-band $(XA^T)^{(i)}$, $i = 1, \ldots, N$, generate transform components with the p-fold filter; the process is represented as

$$(XA^T)^{(i)} \rightarrow \mathrm{permu}\,(XA^T)^{(i)}$$

where the symbol '$\rightarrow$' indicates that the variable before it, after processing, is represented as the variable after it. permu denotes the decomposition of a sub-band and the recombination of its wavelet coefficients: for a given wavelet sub-band, the coefficients are first extracted with the p-fold filter and then recombined into transform components according to their positions.
The specific implementation of step one (three) is: perform a principal component transformation on the transform components generated from each wavelet sub-band. Let $B^{(i)}$ denote the transform matrix of the i-th sub-band, $i = 1, \ldots, N$; then

$$\mathrm{permu}\,(XA^T)^{(i)} \rightarrow B^{(i)}\,\mathrm{permu}\,(XA^T)^{(i)}$$

and, denoting the transformed image by Y,

$$Y := \left[B^{(1)}\mathrm{permu}(XA^T)^{(1)}, B^{(2)}\mathrm{permu}(XA^T)^{(2)}, \ldots, B^{(N)}\mathrm{permu}(XA^T)^{(N)}\right].$$
The specific implementation of step one (four) is: among all the coefficients in Y, retain the $N_1$ largest and set the others to 0. The transformed image at this point, written $\tilde{Y}$, is

$$\tilde{Y} := \left[\tilde{Y}^{(1)}, \tilde{Y}^{(2)}, \ldots, \tilde{Y}^{(N)}\right].$$
Other steps are the same as in the first embodiment.
A wavelet transformation is first applied to the image. To remove the correlation among coefficients within each sub-band, a p-fold decimation filter performs a polyphase decomposition of the sub-band, and a principal component transformation is applied to the decomposed components. The image thus effectively undergoes two decompositions, so its energy is more concentrated and its sparsity stronger. Finally, for the energy-concentrated transformed image, the $N_1$ largest transform coefficients are retained and the others set to 0, which is the sparse estimation. According to the sparse estimation result, the process is inverted to obtain the approximation of the low-frequency image of the original image.
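As a rough sketch of this double decomposition (polyphase split followed by a principal component transform), assuming NumPy and invented helper names; the patent's wavelet step and p-fold filter details are abstracted here to a simple 2×2 block decimation:

```python
import numpy as np

def polyphase_decompose(subband, block=2):
    """Polyphase decomposition with a p-fold filter (p = block**2):
    component k collects the coefficient at position k of every block,
    so the neighbours within each block align across components."""
    comps = [subband[r::block, c::block]
             for r in range(block) for c in range(block)]
    return np.stack([comp.ravel().astype(float) for comp in comps])

def pca_transform(components):
    """Principal component transform across the polyphase components:
    an eigen-decomposition of their covariance decorrelates them."""
    centered = components - components.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / centered.shape[1]
    _, vecs = np.linalg.eigh(cov)
    return vecs.T @ centered
```

Because adjacent wavelet coefficients are correlated, the decorrelated components concentrate energy into fewer coefficients, which is the point of this step.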
The third embodiment: this embodiment differs from the first or second embodiment in that, after step one (four), step one (five) is executed: the low-frequency part of the remote sensing image is estimated according to the following steps.
Step 1: apply the inverse principal component transformation to each transform component sequence $\tilde{Y}^{(i)}$, $i = 1, 2, \ldots, N$; the process is represented as

$$\left[B^{(1)-1}\tilde{Y}^{(1)}, B^{(2)-1}\tilde{Y}^{(2)}, \ldots, B^{(N)-1}\tilde{Y}^{(N)}\right];$$

Step 2: on the basis of step 1, recombine the inversely transformed component sequence, the recombination being denoted $\mathrm{permu}^{-1}$, and then apply the inverse two-dimensional tensor product wavelet transform $A^{-1}$; letting the estimation result of the low-frequency image be $\tilde{X}_L$,

$$\tilde{X}_L := \left[\mathrm{permu}^{-1}\!\left(B^{(1)-1}\tilde{Y}^{(1)}\right), \mathrm{permu}^{-1}\!\left(B^{(2)-1}\tilde{Y}^{(2)}\right), \ldots, \mathrm{permu}^{-1}\!\left(B^{(N)-1}\tilde{Y}^{(N)}\right)\right] A^{-1}.$$

The other steps are the same as in the first or second embodiment.
The fourth embodiment: this embodiment differs from one of the first to third embodiments in that, in step two, the sparsification of the high-frequency part of the remote sensing image proceeds as follows.
The specific implementation of step two (one) is: from the original image X and the estimation result $\tilde{X}_L$ of the low-frequency image, generate the high-frequency image, denoted $X_D$, according to

$$X_D = X - \tilde{X}_L.$$
The specific implementation of step two (two) is: for the high-frequency image $X_D$, let J be the number of Tetrolet decomposition levels, and consider the r-th level, $r = 1, \ldots, J$. The Tetrolet decomposition proceeds as follows:
(1) the low-pass image $x^{r-1}$ of the previous level (with $x^0 = X_D$) is divided into 4×4 blocks $Q_{i,j}$, $i = 0, 1, \ldots, M_1/2^{r+1} - 1$, $j = 0, 1, \ldots, M_2/2^{r+1} - 1$, where $Q_{i,j}$ denotes the block in the i-th row and j-th column of all the blocks;
(2) for each block, a tensor product wavelet transform is performed on the 4 tetromino subsets $I_s^{(c)}$, $s = 0, 1, 2, 3$, of each admissible covering $c = 1, \ldots, 117$, giving 4 low-frequency coefficients $x_D^{r,(c)}[s]$ and 12 Tetrolet coefficients $h_l^{r,(c)}[s]$:

$$X_D^{r,(c)} = \left(x_D^{r,(c)}[s]\right)_{s=0}^{3} \qquad (1)$$

$$H_l^{r,(c)} = \left(h_l^{r,(c)}[s]\right)_{s=0}^{3} \qquad (2)$$

where

$$x_D^{r,(c)}[s] = \sum_{(m,n)\in I_s^{(c)}} \varepsilon[0, L(m,n)]\, x^{r-1}[m,n]$$

$$h_l^{r,(c)}[s] = \sum_{(m,n)\in I_s^{(c)}} \varepsilon[l, L(m,n)]\, x^{r-1}[m,n], \quad l = 1, 2, 3$$

and $\varepsilon[l, m]$, $l, m = 0, \ldots, 3$, is derived from the tensor product wavelet transform matrix. The optimal direction $c^*$ is the covering whose 12 Tetrolet coefficients are minimal:

$$c^* = \arg\min_c \sum_{l=1}^{3} \left\|H_l^{r,(c)}\right\|_1 = \arg\min_c \sum_{l=1}^{3} \sum_{s=0}^{3} \left|h_l^{r,(c)}[s]\right| \qquad (3)$$

so that for each block $Q_{i,j}$ the optimal Tetrolet decomposition $\left(X_D^{r,(c^*)}, H_1^{r,(c^*)}, H_2^{r,(c^*)}, H_3^{r,(c^*)}\right)$ is obtained;
(3) the low- and high-frequency coefficients of each block are rearranged into 2×2 matrices:

$$X\big|_{Q_{i,j}}^{r} = R\!\left(X^{r,(c^*)}\right) = \begin{pmatrix} x^{r,(c^*)}[0] & x^{r,(c^*)}[2] \\ x^{r,(c^*)}[1] & x^{r,(c^*)}[3] \end{pmatrix} \qquad (4)$$

and likewise

$$H_l\big|_{Q_{i,j}}^{r} = R\!\left(H_l^{r,(c^*)}\right), \quad l = 1, 2, 3 \qquad (5)$$

(4) after the optimal decomposition direction is found, the high-frequency coefficients and the direction $c^*$ are saved, and the Tetrolet decomposition of the low-frequency image continues until the J-1 level ends. The other steps are the same as in one of the first to third embodiments.
Subtracting the obtained low-frequency image from the original image yields a detail image containing most textures and edges. Because the Tetrolet transform keeps details well, it is used to transform the detail image. Since the high-frequency sub-band coefficients still have large amplitudes after this transform, an adaptive decomposition of the high-frequency sub-bands further aggregates the energy, concentrating most detail information into a few coefficients and thereby retaining more detail while improving sparsity. The inverse transformation follows the adaptive decomposition process, and the transformed image of the whole detail image is then inverse transformed to obtain the approximate detail image.
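A toy sketch of the covering selection behind this detail-preserving step may help. It uses the standard 2D Haar matrix as the $\varepsilon$ matrix and only two of the 117 admissible coverings; the names and the reduced covering list are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Haar-type transform matrix: row 0 is the low-pass, rows 1-3 the high-pass.
EPS = 0.5 * np.array([[1,  1,  1,  1],
                      [1,  1, -1, -1],
                      [1, -1,  1, -1],
                      [1, -1, -1,  1]], dtype=float)

# Each covering: 4 tetrominoes, each a list of 4 (row, col) positions
# inside the 4x4 block. Only square and vertical-bar coverings are shown.
COVERINGS = [
    [[(0,0),(0,1),(1,0),(1,1)], [(0,2),(0,3),(1,2),(1,3)],
     [(2,0),(2,1),(3,0),(3,1)], [(2,2),(2,3),(3,2),(3,3)]],
    [[(0,0),(1,0),(2,0),(3,0)], [(0,1),(1,1),(2,1),(3,1)],
     [(0,2),(1,2),(2,2),(3,2)], [(0,3),(1,3),(2,3),(3,3)]],
]

def best_covering(block):
    """Return (c*, coefficients): the covering minimising the l1 norm
    of the 12 high-pass (Tetrolet) coefficients, as in eq. (3)."""
    best = None
    for c, cover in enumerate(COVERINGS):
        coeffs = np.stack([EPS @ np.array([block[p] for p in tet], dtype=float)
                           for tet in cover])       # shape (4 tetrominoes, 4)
        cost = np.abs(coeffs[:, 1:]).sum()          # l1 over 12 high coeffs
        if best is None or cost < best[0]:
            best = (cost, c, coeffs)
    return best[1], best[2]
```

On a block whose columns are constant, the vertical-bar covering yields zero high-pass coefficients and is chosen, which is exactly the adaptivity that keeps edge and texture energy out of the discarded coefficients.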
The fifth embodiment: this embodiment differs from one of the first to fourth embodiments in the implementation of the adaptive decomposition method in step two (two). Let the high-frequency sub-bands after the J-level Tetrolet transform be $HF_{m,d}$, where m denotes the scale, $m = 1, 2, \ldots, J$, and d the direction, $d = 1, 2, 3$, with '1' the horizontal, '2' the vertical, and '3' the diagonal direction.
Step 1: compute the energy of each $HF_{m,d}$, denoted $E_{m,d}$:

$$E_{m,d} = \frac{1}{MN} \sum_{i,j}^{M,N} h(i,j)^2$$

where M, N denote the size of the corresponding sub-band and h(i, j) the coefficient value of the sub-band at position (i, j);
Step 2: judge the importance of each sub-band and continue decomposing the important ones. For the m-th level, $m = 1, 2, \ldots, J$, the judgment and decomposition proceed as follows: compute the average energy $E_{m,ave}$ of the three high-frequency sub-bands at scale m; for each sub-band $HF_{m,d}$, $d = 1, 2, 3$, if $E_{m,d} \geq E_{m,ave}$, the sub-band $HF_{m,d}$ is judged important and one further Tetrolet decomposition is applied to it, generating four new sub-bands; the energies of these four sub-bands are recomputed and their importance judged, with decomposition continuing for the important ones. The process repeats until all high-frequency sub-bands have been examined. The other steps are the same as in one of the first to fourth embodiments.
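The importance test above can be stated compactly. The following NumPy sketch (function names assumed, not from the patent) computes the per-sub-band energy and flags the sub-bands to decompose further:

```python
import numpy as np

def subband_energy(h):
    """E_{m,d} = (1/MN) * sum over (i,j) of h(i,j)^2."""
    return float(np.mean(np.asarray(h, dtype=float) ** 2))

def important_subbands(level_subbands):
    """Given the three directional sub-bands (d = 1, 2, 3) of one level,
    return the indices of those judged important, i.e. with
    E_{m,d} >= E_{m,ave}."""
    energies = [subband_energy(h) for h in level_subbands]
    e_ave = sum(energies) / len(energies)
    return [d for d, e in enumerate(energies) if e >= e_ave]
```

Each flagged sub-band would then receive one further Tetrolet decomposition, and the test would be re-applied to the four resulting sub-bands.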
The sixth embodiment: this embodiment differs from one of the first to fifth embodiments in that, in step two, after step two (four), step two (five) is executed: the high-frequency part of the image is estimated according to the following steps.
Step 1: after the adaptive decomposition of the high-frequency sub-bands, most of the energy of the high-frequency image is concentrated in a few coefficients; the $N_2$ coefficients with the largest absolute values are retained and the rest set to 0.
Step 2: for the transformed image with the retained $N_2$ coefficients, apply the inverse Tetrolet transform to the corresponding sub-bands in the reverse order of the adaptive decomposition, and then apply the inverse Tetrolet transform to the whole image, finally obtaining the estimated image of the high-frequency image, denoted $\tilde{X}_D$.
The other steps are the same as in one of the first to fifth embodiments.
The seventh embodiment: this embodiment differs from one of the first to sixth embodiments in that, from the estimation result $\tilde{X}_L$ of the low-frequency part and the estimation result $\tilde{X}_D$ of the high-frequency part, the estimated image of the original image X is obtained as

$$\tilde{X} = \tilde{X}_L + \tilde{X}_D.$$

The other steps are the same as in one of the first to sixth embodiments.
Effect verification:
to verify the effect of the present invention, an experiment was performed using a remote sensing image, as shown in fig. 2 (a). For the proposed hybrid transformation method, when estimating the low frequency, 9/7 biorthogonal wavelet decomposition is adopted, the decomposition layer number is five levels, and the number of reserved coefficients N1 is 6000; when detail images are estimated, five-level Tetrolet transformation is adopted, and the number of reserved coefficients N2 is 1500. For comparison, 9/7 biorthogonal wavelets, the Tetrolet transform, the Curvelet transform, and the Contourlet transform were used for comparison with the proposed method, respectively, under the same conditions. For these four methods for comparison, 7500 coefficients are retained directly after the transform. The PSNR results of the reconstructed images and the partially enlarged views of the reconstructed images are shown in fig. 2(b) - (k), and the partially enlarged regions are indicated by white frames in the original image. As can be seen from fig. 2, the proposed hybrid method performs best, the Tetrolet transform method is second and the Curvelet method is worst.
To further demonstrate the effectiveness of the invention, two remote sensing images, (a) RS1 and (b) RS2, each of size 512 × 512 (262144 coefficients), were used as test images, as shown in fig. 3. In the experiment, the proposed hybrid transformation method is compared with the 9/7 tensor wavelet, the Tetrolet transform, the Curvelet transform, and the Contourlet transform, all with 5 decomposition levels. For the hybrid method, let N1 be the number of coefficients retained by the 9/7 wavelet transform and N2 the number retained by the Tetrolet transform; under N1:N2 = 4:1 and N1:N2 = 3:1, PSNR, RMSE, MAE, and MAD serve as objective evaluation indices and SSIM as a subjective evaluation index. The PSNR comparisons are given in tables 1 and 2 and the SSIM comparisons in tables 3 and 4. Taking RS1 as an example, the RMSE, MAE, and MAD comparisons are shown in fig. 4; the results for RS2 are similar.
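For reference, the objective indices named here can be computed as follows under common definitions; the patent does not spell out its formulas, and MAD is taken here as the maximum absolute difference, which is an assumption.

```python
import numpy as np

def rmse(x, y):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def mae(x, y):
    """Mean absolute error."""
    return float(np.mean(np.abs(x - y)))

def mad(x, y):
    """Maximum absolute difference (one common reading of MAD;
    the patent's exact definition is not given)."""
    return float(np.max(np.abs(x - y)))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given peak value."""
    e = rmse(x, y)
    return float('inf') if e == 0.0 else float(20.0 * np.log10(peak / e))
```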
TABLE 1 PSNR results obtained with different methods (N1:N2 = 4:1)

TABLE 2 PSNR results obtained with different methods (N1:N2 = 3:1)

TABLE 3 SSIM index comparison of the proposed method with four other methods (N1:N2 = 4:1)

TABLE 4 SSIM index comparison of the proposed method with four other methods (N1:N2 = 3:1)
FIG. 5 illustrates the polyphase decomposition process of a wavelet sub-band, taking LL1 as an example (assuming LL1 is 8 × 8). The process is implemented by a p-fold filter, here with p = 4. LL1 is divided into 2 × 2 blocks; different blocks are shown in different colors for visual clarity, and the numbers in the blocks give the positions of the coefficients. First, the top-left coefficient of each block is extracted, and these coefficients are placed in the order of the original blocks to obtain the first component; next, the top-right coefficient of each block is extracted and placed in the order of the original blocks to obtain the second component; and so on, until finally the bottom-right coefficient of each block is extracted and placed in the order of the original blocks to obtain the fourth component. The four adjacent coefficients of each block thus occupy exactly the same position in the four components, so PCA is applied to remove the redundancy among the components, i.e., the redundancy among adjacent wavelet coefficients. The polyphase decomposition of the other sub-bands is similar.
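The alignment property described here, namely that the four neighbours of each 2×2 block occupy the same position in the four components, can be checked numerically (an 8×8 toy sub-band with coefficients numbered 0..63; index choices purely illustrative):

```python
import numpy as np

# An 8x8 sub-band with coefficients numbered 0..63, as in fig. 5.
ll1 = np.arange(64).reshape(8, 8)

# p = 4 decimation: four polyphase components (top-left, top-right,
# bottom-left, bottom-right coefficient of every 2x2 block), each 4x4.
comps = [ll1[r::2, c::2] for r in (0, 1) for c in (0, 1)]

# The block in block-row 1, block-column 2 occupies rows 2:4, cols 4:6;
# its four coefficients all sit at index (1, 2) of the four components.
block = ll1[2:4, 4:6]
```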
These results show that, for different numbers of retained coefficients, the proposed hybrid method is superior to the other four transform methods both subjectively and objectively, further proving its effectiveness.

Claims (7)

1. A remote sensing image sparse estimation method based on hybrid transformation is characterized in that: the method is realized according to the following steps:
step one, carrying out sparse estimation on the low frequency of a remote sensing image:
step one (one): perform tensor product wavelet transformation on the original image; step one (two): perform polyphase decomposition on each sub-band with a p-fold decimation filter; step one (three): perform principal component transformation on the decomposed components; step one (four): for the transformed image, retain the N1 largest transform coefficients and set the other coefficients to 0; this gives the sparse estimation result, which is then inverse transformed;
step two, carrying out sparse estimation on the high frequency of the remote sensing image:
step two (one): subtract the low-frequency image obtained in step one from the original image to obtain a high-frequency image containing most textures and edges; step two (two): transform the high-frequency image with the Tetrolet transform, and further aggregate energy in the transformed high-frequency sub-bands with the adaptive decomposition method; step two (three): for the transformed image, retain the N2 largest transform coefficients and set the other coefficients to 0; step two (four): following the adaptive decomposition process, apply the corresponding inverse transformation to the adaptively decomposed high-frequency sub-bands, then inverse transform the transformed image of the whole high-frequency image to obtain the sparse estimation result;
and step three: superpose the sparse estimation results of steps one and two to obtain the final estimated image.
2. The remote sensing image sparse estimation method based on hybrid transformation as claimed in claim 1, characterized in that the sparsification process of the low-frequency part of the remote sensing image in step one is as follows. Let the original image be $X \in \mathbb{R}^{M_1 \times M_2}$.
The specific implementation of step one (one) is: let $A$ denote the two-dimensional tensor product wavelet transform, and let $XA^T$ denote the result of applying it to $X$; the transformed image is represented as
$$XA^T := \left[\,(XA^T)^{(1)},\; (XA^T)^{(2)},\; \ldots,\; (XA^T)^{(N)}\,\right]$$
where ":=" means "is defined as" and $N$ denotes the number of wavelet sub-bands.
The specific implementation of step one (two) is: for each sub-band $(XA^T)^{(i)}$, $i = 1, \ldots, N$, generate transform components with a p-fold decimation filter; the process is represented as
$$(XA^T)^{(i)} \;\longmapsto\; \mathrm{permu}\,(XA^T)^{(i)}$$
where the symbol "$\longmapsto$" indicates that the variable before the symbol, after processing, becomes the variable after the symbol; $\mathrm{permu}$ denotes the sub-band decomposition and wavelet coefficient recombination process: for a given wavelet sub-band, the coefficients are first extracted with a p-fold decimation filter, and the extracted coefficients are then recombined into transform components according to their corresponding positions.
The specific implementation of step one (three) is: apply a principal component transform to the transform components generated from each wavelet sub-band. Let $B^{(i)}$ denote the transform matrix of the $i$-th sub-band; then
$$\mathrm{permu}\,(XA^T)^{(i)} \;\longmapsto\; B^{(i)}\,\mathrm{permu}\,(XA^T)^{(i)}, \qquad i = 1, \ldots, N.$$
If the transformed image is denoted $Y$, then
$$Y := \left[\,B^{(1)}\mathrm{permu}(XA^T)^{(1)},\; B^{(2)}\mathrm{permu}(XA^T)^{(2)},\; \ldots,\; B^{(N)}\mathrm{permu}(XA^T)^{(N)}\,\right].$$
The specific implementation of step one (four) is: among all the coefficients in $Y$, retain the $N_1$ largest coefficients and set the others to 0. Denoting the transformed image at this point by $\tilde{Y}$, then
$$\tilde{Y} := \left[\,\tilde{Y}^{(1)},\; \tilde{Y}^{(2)},\; \ldots,\; \tilde{Y}^{(N)}\,\right].$$
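The polyphase split and principal component step of claim 2 can be sketched as below, assuming a simple p-fold split of the flattened sub-band. The helpers `permu` and `pca_transform` are illustrative stand-ins for the patented $\mathrm{permu}$ and $B^{(i)}$ operations, not the actual implementation.

```python
import numpy as np

def permu(subband, p=2):
    """p-fold polyphase split: extract every p-th coefficient and stack the
    p phases as rows ('decimate, then recombine by position')."""
    flat = subband.ravel()
    n = (flat.size // p) * p               # drop any remainder for simplicity
    return flat[:n].reshape(-1, p).T       # shape (p, n // p)

def pca_transform(components):
    """Principal component transform: rows of the returned result are the
    components projected onto the principal directions (mean removed)."""
    centered = components - components.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / centered.shape[1]
    _, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    b = eigvecs[:, ::-1].T                 # transform matrix B, largest first
    return b, b @ centered
```

After the transform, the first component row carries at least as much variance as the later rows, which is what makes the subsequent keep-the-largest-$N_1$ thresholding effective.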
3. The remote sensing image sparse estimation method based on hybrid transformation as claimed in claim 2, characterized in that after step one (four) is finished, step one (five) is executed: the low-frequency part of the remote sensing image is estimated according to the following steps.
Step 1, apply the inverse principal component transform to each transform component sequence $\tilde{Y}^{(i)}$, $i = 1, 2, \ldots, N$; the process is represented as
$$\left[\,B^{(1)-1}\tilde{Y}^{(1)},\; B^{(2)-1}\tilde{Y}^{(2)},\; \ldots,\; B^{(N)-1}\tilde{Y}^{(N)}\,\right];$$
Step 2, on the basis of step 1, recombine the inversely transformed component sequences, denoted $\mathrm{permu}^{-1}$, and then apply the inverse two-dimensional tensor product wavelet transform $A^{-1}$. Let the estimation result of the low-frequency image be $\tilde{X}_L$; then
$$\tilde{X}_L := \left[\,\mathrm{permu}^{-1}\!\left(B^{(1)-1}\tilde{Y}^{(1)}\right),\; \mathrm{permu}^{-1}\!\left(B^{(2)-1}\tilde{Y}^{(2)}\right),\; \ldots,\; \mathrm{permu}^{-1}\!\left(B^{(N)-1}\tilde{Y}^{(N)}\right)\,\right] A^{-1}.$$
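Claim 3's inverse path ($B^{(i)-1}$, then $\mathrm{permu}^{-1}$, then $A^{-1}$) can be illustrated with a toy roundtrip, assuming a hypothetical p-fold polyphase split as the forward step: with no coefficients zeroed, the sub-band is recovered exactly. All function names here are illustrative, not from the patent.

```python
import numpy as np

def permu(subband, p=2):
    """Hypothetical p-fold polyphase split (phases stacked as rows);
    assumes the sub-band size is divisible by p."""
    return subband.ravel().reshape(-1, p).T

def permu_inv(components, shape):
    """Inverse of permu: interleave the phases back and restore the shape."""
    return components.T.ravel().reshape(shape)

def roundtrip(subband, p=2, seed=0):
    comps = permu(subband, p)
    # Any invertible B works; an orthonormal matrix keeps the inverse simple.
    b = np.linalg.qr(np.random.default_rng(seed).normal(size=(p, p)))[0]
    y = b @ comps                          # forward principal-component step
    return permu_inv(np.linalg.inv(b) @ y, subband.shape)
```

The lossless roundtrip confirms that the sparsity in the reconstruction comes only from the coefficients discarded in step one (four), not from the transforms themselves.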
4. The remote sensing image sparse estimation method based on hybrid transformation as claimed in claim 1 or 3, characterized in that the sparsification of the high-frequency part of the remote sensing image in step two proceeds as follows.
The specific implementation of step two (one) is: from the original image $X$ and the estimation result $\tilde{X}_L$ of the low-frequency image, generate the high-frequency image, denoted $X_D$, according to
$$X_D = X - \tilde{X}_L.$$
The specific implementation of step two (two) is: for the high-frequency image $X_D$, let $J$ be the number of Tetrolet decomposition levels; at the $r$-th level, $r = 1, \ldots, J$, the Tetrolet decomposition is performed as follows:
(1) Divide the image $X_D^{r-1}$ into 4×4 blocks $Q_{i,j}$, $i = 0, 1, \ldots, M_1/2^{r+1}-1$, $j = 0, 1, \ldots, M_2/2^{r+1}-1$, where $Q_{i,j}$ denotes the block in the $i$-th row and $j$-th column of all the blocks;
(2) For each block, perform a tensor product wavelet transform on the 4 tetromino subsets $I_s^{(c)}$, $s = 0, 1, 2, 3$, of each admissible four-grid tiling $c$, $c = 1, \ldots, 117$, to obtain the corresponding 4 low-frequency coefficients $x_D^{r,(c)}[s]$ and 12 Tetrolet coefficients $h_l^{r,(c)}[s]$:
$$X_D^{r,(c)} = \left(x_D^{r,(c)}[s]\right)_{s=0}^{3} \qquad (1)$$
$$H_l^{r,(c)} = \left(h_l^{r,(c)}[s]\right)_{s=0}^{3} \qquad (2)$$
where
$$x_D^{r,(c)}[s] = \sum_{(m,n)\in I_s^{(c)}} \varepsilon[0, L(m,n)]\, x^{r-1}[m,n]$$
$$h_l^{r,(c)}[s] = \sum_{(m,n)\in I_s^{(c)}} \varepsilon[l, L(m,n)]\, x^{r-1}[m,n], \qquad l = 1, 2, 3$$
where $\varepsilon[l,m]$, $l, m = 0, \ldots, 3$, is taken from the tensor product wavelet transform matrix. The optimal direction $c^*$ is selected as the tiling for which the $\ell_1$ norm of the 12 Tetrolet coefficients is minimal:
$$c^* = \arg\min_c \sum_{l=1}^{3} \left\| H_l^{r,(c)} \right\|_1 = \arg\min_c \sum_{l=1}^{3} \sum_{s=0}^{3} \left| h_l^{r,(c)}[s] \right| \qquad (3)$$
For each block $Q_{i,j}$, the optimal Tetrolet decomposition $\left(X^{r,(c^*)}, H_1^{r,(c^*)}, H_2^{r,(c^*)}, H_3^{r,(c^*)}\right)$ is thereby obtained.
(3) Rearrange the low-frequency and high-frequency coefficients of each block into 2×2 matrices:
$$X\big|_{Q_{i,j}}^{r} = R\!\left(X^{r,(c^*)}\right) = \begin{pmatrix} x^{r,(c^*)}[0] & x^{r,(c^*)}[2] \\ x^{r,(c^*)}[1] & x^{r,(c^*)}[3] \end{pmatrix} \qquad (4)$$
and likewise
$$H_l\big|_{Q_{i,j}}^{r} = R\!\left(H_l^{r,(c^*)}\right), \qquad l = 1, 2, 3 \qquad (5)$$
(4) After the optimal decomposition direction is found, save the high-frequency coefficients and the direction $c^*$, and continue the Tetrolet decomposition on the low-frequency image for the remaining $J-1$ levels.
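The covering selection of equation (3), picking the tiling whose 12 high-pass coefficients have minimal $\ell_1$ norm, can be sketched as follows. Only two hand-picked straight-tetromino coverings of a 4×4 block are tried here, whereas the actual Tetrolet transform searches all 117 admissible coverings, so `COVERINGS` is purely illustrative; `EPS` is the Haar-type 4-point matrix playing the role of $\varepsilon[l,m]$.

```python
import numpy as np

EPS = 0.5 * np.array([[1,  1,  1,  1],    # Haar-type matrix epsilon[l, m]
                      [1,  1, -1, -1],
                      [1, -1,  1, -1],
                      [1, -1, -1,  1]], dtype=float)

# Each covering: 4 tetrominoes, each a list of 4 (row, col) pixel positions.
COVERINGS = [
    [[(r, c) for c in range(4)] for r in range(4)],   # 4 horizontal I-pieces
    [[(r, c) for r in range(4)] for c in range(4)],   # 4 vertical I-pieces
]

def tetrolet_cost(block, covering):
    """l1 norm of the high-pass coefficients over the covering's tetrominoes."""
    cost = 0.0
    for tet in covering:
        vals = np.array([block[m, n] for (m, n) in tet])
        coeffs = EPS @ vals                # coeffs[0] low-pass, coeffs[1:] high-pass
        cost += np.abs(coeffs[1:]).sum()
    return cost

def best_covering(block):
    """Index of the covering minimizing equation (3)'s cost, plus all costs."""
    costs = [tetrolet_cost(block, cov) for cov in COVERINGS]
    return int(np.argmin(costs)), costs
```

For a block that is constant along rows, the horizontal covering yields zero high-pass energy and is selected, matching the intuition that the chosen tiling should align with local geometry.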
5. The remote sensing image sparse estimation method based on hybrid transformation as claimed in claim 4, characterized in that the specific implementation of the adaptive decomposition method in step two (two) is as follows. Let the high-frequency sub-bands after the $J$-level Tetrolet transform be $HF_{m,d}$, where $m$ denotes the scale, $m = 1, 2, \ldots, J$, and $d$ denotes the direction, $d = 1, 2, 3$, with "1" the horizontal, "2" the vertical and "3" the diagonal direction.
Step 1, calculate the mean energy of each $HF_{m,d}$, denoted $E_{m,d}$:
$$E_{m,d} = \frac{1}{MN} \sum_{i,j}^{M,N} h(i,j)^2$$
where $M, N$ denote the size of the corresponding sub-band and $h(i,j)$ denotes the coefficient value of the sub-band at position $(i,j)$;
Step 2, judge the importance of each sub-band, and continue decomposing the important ones. For the sub-bands at the $m$-th scale, $m = 1, 2, \ldots, J$, the judgment and decomposition process is as follows: calculate the average energy $E_{m,ave}$ of the three high-frequency sub-bands at the $m$-th scale; for each sub-band $HF_{m,d}$, $d = 1, 2, 3$, at that scale, if $E_{m,d} \geq E_{m,ave}$, the sub-band $HF_{m,d}$ is judged to be important and one Tetrolet decomposition is applied to it, generating four new sub-bands; the energies of these four sub-bands are recalculated and their importance judged, and the important ones are decomposed further; this process is repeated until all high-frequency sub-bands have been examined.
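The energy test of claim 5 can be sketched as below; `mean_energy` and `important_subbands` are hypothetical helpers implementing only the $E_{m,d} \geq E_{m,ave}$ decision at one scale, not the recursive decomposition itself.

```python
import numpy as np

def mean_energy(subband):
    """E_{m,d} = (1 / MN) * sum of squared coefficients."""
    return float(np.mean(subband.astype(float) ** 2))

def important_subbands(subbands):
    """subbands: the three directional sub-bands [HF_m1, HF_m2, HF_m3].
    Returns the direction indices d whose energy reaches the scale average."""
    energies = [mean_energy(s) for s in subbands]
    e_ave = sum(energies) / len(energies)
    return [d for d, e in enumerate(energies, start=1) if e >= e_ave]
```

Sub-bands returned by `important_subbands` would then each receive one further Tetrolet decomposition, and the test would be reapplied to the four resulting sub-bands.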
6. The remote sensing image sparse estimation method based on hybrid transformation as claimed in claim 3, characterized in that in step two, after step two (four) is executed, step two (five) is executed: the high-frequency part of the image is estimated according to the following steps.
Step 1, after the high-frequency sub-bands have been adaptively decomposed, most of the energy of the high-frequency image is concentrated in a few coefficients; retain the $N_2$ coefficients with the largest absolute values and set the remaining coefficients to 0;
Step 2, for the transformed image in which $N_2$ coefficients are retained, apply the inverse Tetrolet transform to the corresponding sub-bands in the reverse order of the adaptive decomposition, and then apply the inverse Tetrolet transform to the transformed image of the whole high-frequency image; finally, the estimated image of the high-frequency image is obtained, and its result is denoted $\tilde{X}_D$.
7. The remote sensing image sparse estimation method based on hybrid transformation as claimed in claim 6, characterized in that from the estimation result $\tilde{X}_L$ of the low-frequency part of the image and the estimation result $\tilde{X}_D$ of the high-frequency part, the estimated image $\tilde{X}$ of the original image $X$ is obtained as
$$\tilde{X} := \tilde{X}_L + \tilde{X}_D.$$
CN201410074775.3A 2014-03-03 2014-03-03 Remote sensing image sparse estimation method based on hybrid transformation Active CN103824263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410074775.3A CN103824263B (en) Remote sensing image sparse estimation method based on hybrid transformation

Publications (2)

Publication Number Publication Date
CN103824263A (application) 2014-05-28
CN103824263B (grant) 2016-09-14


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657944A (en) * 2014-12-31 2015-05-27 中国科学院遥感与数字地球研究所 Compressed sensing remote sensing image reconstruction method based on reference image texture constraints
CN104657944B (en) * 2014-12-31 2018-08-14 中国科学院遥感与数字地球研究所 Based on the compressed sensing remote sensing images method for reconstructing with reference to image texture constraint
CN106612438A (en) * 2016-01-28 2017-05-03 四川用联信息技术有限公司 Image compression method based on overlapping district advanced wavelet transformation technique
CN106651820A (en) * 2016-09-23 2017-05-10 西安电子科技大学 Sparse tensor neighborhood embedding-based remote sensing image fusion method
CN106651820B (en) * 2016-09-23 2019-06-21 西安电子科技大学 Remote sensing image fusion method based on sparse tensor neighbour insertion
CN107967676A (en) * 2017-11-10 2018-04-27 安徽大学 A kind of steady Tetrolet becomes scaling method
CN107967676B (en) * 2017-11-10 2022-01-11 安徽大学 Steady Tetrolet transformation algorithm

Legal Events

Code Description
C06 / PB01: Publication
C10 / SE01: Entry into substantive examination
C14 / GR01: Patent grant
TR01: Transfer of patent right
  Effective date of registration: 2019-07-01
  Address after: 150000, intersection of North Road and Xingkai Road, Dalian Economic and Trade Zone, Harbin, Heilongjiang
  Patentee after: HIT ROBOT GROUP Co.,Ltd.
  Address before: 150001, No. 92 West Dazhi Street, Nangang District, Harbin, Heilongjiang
  Patentee before: Harbin Institute of Technology
PP01: Preservation of patent right
  Effective date of registration: 2024-07-15
  Granted publication date: 2016-09-14
CI03: Correction of invention patent
  Correction item: preservation of patent right
  Correct: revoke the announcement
  False: registration effective date 2024-07-15
  Number: 31-02, Volume: 40
PP01: Preservation of patent right
  Effective date of registration: 2024-06-26
  Granted publication date: 2016-09-14