CN104268833A - New image fusion method based on shift invariance shearlet transformation - Google Patents


Publication number
CN104268833A (application CN201410470345.3A)
Authority
CN
China
Prior art keywords
matrix
image
fusion
frequency sub-band
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410470345.3A
Other languages
Chinese (zh)
Other versions
CN104268833B (en)
Inventor
罗晓清
张战成
张翠英
吴小俊
吴兆明
李丽兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201410470345.3A priority Critical patent/CN104268833B/en
Publication of CN104268833A publication Critical patent/CN104268833A/en
Application granted granted Critical
Publication of CN104268833B publication Critical patent/CN104268833B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a novel image fusion method based on the shift-invariant shearlet transform (SIST). First, the source images are decomposed by SIST into multi-scale, multi-direction low-frequency and high-frequency sub-band coefficients. The low-frequency sub-bands carry the outline information of the image; a local structure descriptor that identifies image sharpness is obtained by singular value decomposition of the local structure tensor, serves as the activity measure in the fusion strategy, and the low-frequency coefficients are fused with a choose-max rule. The high-frequency sub-bands carry the detail information of the image; the invention proposes a new edge strength measure and builds a multi-strategy fusion rule based on the sigmoid function and this edge strength measure for high-frequency sub-band fusion. Finally, the inverse SIST is applied to the fused coefficients to obtain the final fused image. The method overcomes the edge distortion easily caused by traditional image fusion methods, and the fused image retains more edge and detail information.

Description

Novel image fusion method based on the shift-invariant shearlet transform
Technical field
The invention belongs to the field of image fusion and its applications, and in particular relates to a novel image fusion method based on the shift-invariant shearlet transform.
Background technology
Image fusion is an important branch of data fusion and a current hotspot in information fusion. Image fusion refers to the process of combining the complementary or redundant information acquired by multiple image sensors, so that the fused image is better suited to human visual perception or computer processing. Image fusion technology is widely used in fields such as military applications, remote sensing, and medicine.
Current research methods for image fusion fall into two broad classes: spatial-domain and transform-domain methods. The simplest spatial-domain method directly computes a weighted average of the source image pixels; it is easy to implement, but the contrast of the fusion result is reduced. Transform-domain image fusion methods are widely used. Common transform-domain methods include the Laplacian pyramid transform, the gradient pyramid transform, and the wavelet transform. Pyramid methods, however, cannot capture directional information. The wavelet transform captures information only in the horizontal, vertical, and diagonal directions, and efficiently handles only function classes with point singularities; it is powerless against higher-dimensional functions with line or surface singularities. Multi-scale geometric analysis has therefore attracted wide attention; common methods include the ridgelet, curvelet, contourlet, and shearlet transforms. The ridgelet transform is suited to capturing straight-line information but performs poorly on curves. The curvelet transform is too complex to implement. The contourlet transform has strong directional selectivity and effectively represents directional singularities in a signal, but because it lacks shift invariance it easily introduces pseudo-Gibbs phenomena near singularities. To overcome this shortcoming, Cunha et al. proposed the nonsubsampled contourlet transform (NSCT), which is shift-invariant, but its computational complexity is high and its implementation involved. To address the problems of the above transforms, Guo et al. proposed the shearlet transform (ST) built on affine systems with composite dilations; compared with other multi-scale transform tools commonly used in image fusion, it can accurately detect the directions of singularities and provides an optimally sparse approximation of images. However, because the discrete shearlet transform employs downsampling, it lacks shift invariance and tends to produce pseudo-Gibbs phenomena near singularities during image fusion. To overcome these problems, Easley et al. proposed the shift-invariant shearlet transform (SIST), which retains all the advantages of popular multi-scale analysis tools while its implementation contains no downsampling and is therefore shift-invariant. These advantages make SIST very promising for image fusion.
Although SIST provides an effective tool for image fusion, a good fusion algorithm depends not only on an effective decomposition tool but also on the design of the fusion rule. The fusion strategies commonly used in fusion rules fall into two broad classes: choose-max and weighted average. A choose-max strategy selects, according to an extracted feature, the coefficient with the larger feature value as the fused coefficient. For example, the NSCT-based image fusion method proposed by Wang et al. (see "Image fusion algorithm based on nonsubsampled contourlet transform", Future Computer and Communication (ICFCC), 2010 2nd International Conference on, IEEE, 2010, 1: V1-220-V1-224) extracts a spatial frequency feature and uses a choose-max strategy to select fused coefficients according to spatial frequency; the multi-focus image fusion method based on the shearlet transform and local energy proposed by Li et al. (see "Multi-focus image fusion based on shearlet and local energy", Signal Processing Systems (ICSPS), 2010 2nd International Conference on, IEEE, 2010, 1: V1-632-V1-635) extracts a local energy feature and selects the coefficient with the larger local energy as the fused coefficient. The other widely used strategy is the weighted average, which normalizes the extracted feature values into weights and obtains the fused coefficient by weighted averaging. For example, the weighted image fusion method based on the multi-scale top-hat transform proposed by Bai et al. (see "Weighted image fusion based on multi-scale top-hat transform: Algorithms and a comparison study", Optik - International Journal for Light and Electron Optics, 2013, 124(13): 1660-1668) uses extracted standard deviation, mean, and information entropy features with a weighted-average strategy to obtain the fused coefficients; the fusion method for MRI and PET images based on fuzzy logic and local image features proposed by Javed et al. (see "MRI and PET Image Fusion Using Fuzzy Logic and Image Local Features", The Scientific World Journal, 2014) uses a weighted-average strategy, computing the weights from extracted local features and fuzzy logic.
As the above fusion rule designs show, a choose-max strategy is usually adopted when the images are complementary, and a weighted-average strategy when they are redundant. Multi-source images, however, exhibit both complementarity and redundancy, so relying on a single fusion strategy cannot effectively extract the information in the images.
Summary of the invention
In view of the defects and deficiencies of the prior art described above, the object of the invention is to propose a novel image fusion method based on the shift-invariant shearlet transform, so as to improve the quality of the fused image.
The technical scheme of the invention is a novel image fusion method based on the shift-invariant shearlet transform, characterized by comprising the following steps:
1) Prepare two source images to be fused and decompose each of them with SIST into low-frequency and high-frequency sub-band coefficients;
2) Fuse the low-frequency and the high-frequency sub-band coefficients with different fusion rules:
2.1) For the low-frequency sub-band coefficients, obtain a local structure descriptor that identifies image sharpness by singular value decomposition of the local structure tensor, use it as the activity measure in the fusion strategy, and fuse with a choose-max strategy;
2.2) For the high-frequency sub-band coefficients, construct a new edge strength measure and fuse with a multi-strategy fusion rule based on the sigmoid function and the edge strength measure;
3) Apply the inverse SIST to the fused coefficients obtained in step 2) to obtain the fused image.
Step 1) is specifically: decompose the two M × N source images A and B to be fused with SIST into low-frequency and high-frequency sub-band coefficients {C_j0^A, C_j^A} and {C_j0^B, C_j^B}, where C_j0^A and C_j0^B are the low-frequency sub-band coefficients, and C_j^A and C_j^B are the series of high-frequency sub-band coefficients.
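As a structural sketch of steps 1)-3), the fragment below fuses two images with a one-level stand-in decomposition (a box-filter low-pass and its residual) in place of the SIST filter bank, a choose-max rule on the low band, and a plain average on the high band. Every component here is a simplified placeholder of this sketch, not the patent's method: an actual SIST decomposition requires a dedicated shearlet library.

```python
import numpy as np

def box_blur(img, r=1):
    """Box filter used as a stand-in low-pass (NOT the SIST filter bank)."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * r + 1
    for dy in range(k):                      # sum the k x k shifted copies
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(a, b):
    """Skeleton of steps 1)-3): decompose, fuse bands with different rules,
    reconstruct. Stand-in activity measures only."""
    a = a.astype(float); b = b.astype(float)
    lowA, lowB = box_blur(a), box_blur(b)
    highA, highB = a - lowA, b - lowB        # residual as 'high-frequency' band
    # low band: choose-max on a stand-in activity measure (absolute value)
    low = np.where(np.abs(lowA) >= np.abs(lowB), lowA, lowB)
    # high band: plain average (the patent uses a sigmoid of edge strengths)
    high = 0.5 * (highA + highB)
    return low + high                        # trivial 'inverse transform'
```

Because the stand-in decomposition is exact (low + high reconstructs the input), fusing an image with itself returns the image unchanged.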
Step 2.1) comprises the following steps:
A) For a point f(x, y) in the image, its gradient is g = ∇f(x, y). Over the t × t neighborhood of f(x, y), the local gradient matrix of the point is

    J = [ g_x(1)   g_y(1)
          ...      ...
          g_x(t^2) g_y(t^2) ]

where g_x(k) and g_y(k), k = 1, 2, ..., t^2, are the derivatives in the x and y directions at the k-th pixel of the neighborhood;
The local structure tensor of the point f(x, y) is then

    T = J^T J = [ Σ_k g_x(k)g_x(k)   Σ_k g_x(k)g_y(k)
                  Σ_k g_x(k)g_y(k)   Σ_k g_y(k)g_y(k) ]

with the sums taken over k = 1, 2, ..., t^2;
The SVD of the local structure tensor T is

    T = USV^T = U [ s1  0
                    0   s2 ] [v1, v2]^T

where U and V are orthogonal matrices, s1 and s2 are the eigenvalues, and v1 and v2 are the corresponding eigenvectors;
The local descriptor Q measuring image sharpness is

    Q = (s1 - s2)^2 * ((s1 - s2) / (s1 + s2))^2
B) Fuse with the choose-max strategy based on the local descriptor Q:

    C_j0^F(x, y) = C_j0^A(x, y)  if Q_j0^A(x, y) >= Q_j0^B(x, y)
                   C_j0^B(x, y)  otherwise

where C_j0^A(x, y), C_j0^B(x, y) and C_j0^F(x, y) are the low-frequency coefficients of the source images A and B and the fused image F at the point (x, y), respectively.
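As an illustration, the descriptor Q of step A) can be computed for a single neighborhood with a few lines of numpy. The finite-difference gradient and the name `sharpness_descriptor` are choices of this sketch; the patent does not fix a particular derivative operator.

```python
import numpy as np

def sharpness_descriptor(patch):
    """Local structure descriptor Q = (s1-s2)^2 * ((s1-s2)/(s1+s2))^2 from the
    2x2 structure tensor of a t x t patch (simple finite-difference gradients)."""
    gy, gx = np.gradient(patch.astype(float))   # np.gradient: axis 0 (y) first
    # structure tensor T = J^T J accumulated over the whole neighborhood
    T = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    s1, s2 = np.linalg.svd(T, compute_uv=False)  # sorted, s1 >= s2 >= 0
    if s1 + s2 == 0:                             # flat patch: no structure
        return 0.0
    return (s1 - s2) ** 2 * ((s1 - s2) / (s1 + s2)) ** 2
```

A flat patch yields Q = 0, while a patch containing a straight edge (s1 >> s2) yields a large Q, which is what makes Q usable as a sharpness/activity measure.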
Step 2.2) comprises the following steps:
A) Compute the edge strength of the high-frequency sub-band coefficients
1. Horizontal edge strength
Centered at the point (x, y), open an n × n window to obtain the image block X with pixels x_ij. Regard each row of X as an observation and each column as a variable, and compute the unbiased estimate C_h(x) of the covariance matrix:

    C_h(x) = 1/(n - 1) * Σ_{i=1}^{n} (x_i - x̄)(x_i - x̄)^T

where x_i is the i-th observation of the n-dimensional variable and x̄ is the mean of the observations;
Compute the eigenvalues of C_h(x): first diagonalize C_h(x) by SVD to obtain the singular value matrix Σ, then compute the diagonal eigenvalue matrix Λ = Σ^T Σ, whose diagonal entries are the eigenvalues of C_h(x). The largest eigenvalue λ_h of C_h(x) is the horizontal edge strength, namely

    λ_h = max(eig(C_h(x)))
2. Vertical edge strength
Regard each column of the image block X as an observation and each row as a variable, and compute the unbiased estimate C_v(y) of the covariance matrix:

    C_v(y) = 1/(n - 1) * Σ_{i=1}^{n} (y_i - ȳ)(y_i - ȳ)^T

where y_i is the i-th observation of the n-dimensional variable and ȳ is the mean of the observations;
Compute the largest eigenvalue λ_v of C_v(y): first diagonalize C_v(y) by SVD to obtain the singular value matrix Σ, then compute the diagonal eigenvalue matrix Λ = Σ^T Σ, whose diagonal entries are the eigenvalues of C_v(y). The largest eigenvalue λ_v of C_v(y) is the vertical edge strength, namely

    λ_v = max(eig(C_v(y)))
3. Diagonal edge strength
Group the pixels of the image block X along the main diagonal direction into an image block Z1, and along the anti-diagonal direction into an image block Z2, with pixels z_ij^1 and z_ij^2 respectively. Treat each row of Z1 as an observation and each column as a variable, and likewise for Z2, and compute the unbiased estimates C_D1(z1) and C_D2(z2) of their covariance matrices:

    C_D1(z1) = 1/(n - 1) * Σ_{i=1}^{n} (z_i^1 - z̄1)(z_i^1 - z̄1)^T
    C_D2(z2) = 1/(n - 1) * Σ_{i=1}^{n} (z_i^2 - z̄2)(z_i^2 - z̄2)^T

where z̄1 = (1/n^2) Σ_i Σ_j z_ij^1 and z̄2 = (1/n^2) Σ_i Σ_j z_ij^2; z_i^1 and z_i^2 are the i-th observations of the n-dimensional variables, and z̄1, z̄2 are the means of the observations;
Compute the eigenvalues of C_D1(z1) and C_D2(z2) as before: diagonalize each matrix by SVD to obtain the singular value matrix Σ, compute Λ = Σ^T Σ, whose diagonal entries are the eigenvalues, and take the largest eigenvalues

    λ_D1 = max(eig(C_D1(z1)))
    λ_D2 = max(eig(C_D2(z2)))

as the diagonal edge strengths;
The edge strength S is the sum of the horizontal edge strength λ_h, the vertical edge strength λ_v, and the two diagonal edge strengths, namely

    S = λ_h + λ_v + λ_D1 + λ_D2
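A compact numpy sketch of the four-direction edge strength S follows. The patent only loosely specifies how the diagonal blocks Z1 and Z2 are formed, so here the diagonal-direction pixels are aligned into columns by cyclically shifting the rows of the block (an assumption of this sketch).

```python
import numpy as np

def _max_eig(c):
    # largest eigenvalue of a symmetric covariance estimate
    return float(np.linalg.eigvalsh(np.atleast_2d(c)).max())

def _diag_block(x, anti=False):
    # align diagonal-direction pixels into columns via cyclic row shifts
    src = np.fliplr(x) if anti else x
    return np.stack([np.roll(src[i], -i) for i in range(x.shape[0])])

def edge_strength(block):
    """S = lambda_h + lambda_v + lambda_D1 + lambda_D2 for an n x n block:
    largest eigenvalues of the unbiased covariance estimates taken over rows,
    columns, and the two diagonal rearrangements of the block."""
    block = block.astype(float)
    lam_h = _max_eig(np.cov(block, rowvar=False))   # rows as observations
    lam_v = _max_eig(np.cov(block, rowvar=True))    # columns as observations
    lam_d1 = _max_eig(np.cov(_diag_block(block), rowvar=False))
    lam_d2 = _max_eig(np.cov(_diag_block(block, anti=True), rowvar=False))
    return lam_h + lam_v + lam_d1 + lam_d2
```

A constant block has zero covariance in every direction, so S = 0; any oriented structure makes at least one of the four terms positive.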
B) Multi-strategy fusion based on the sigmoid function
Apply the multi-strategy fusion rule based on the sigmoid function to the high-frequency sub-bands, with the weighting coefficient ω computed by the sigmoid function, namely

    C_j^F(x, y) = ω(x, y) × C_j^A(x, y) + (1 - ω(x, y)) × C_j^B(x, y)
    ω(x, y) = (S_j^A(x, y) / S_j^B(x, y))^k / (1 + (S_j^A(x, y) / S_j^B(x, y))^k)

where C_j^A(x, y), C_j^B(x, y) and C_j^F(x, y) are the high-frequency coefficients of the source images A and B and the fused image F at the point (x, y), respectively; S_j^A and S_j^B are the edge strengths of the high-frequency sub-band coefficients; k is the contraction factor.
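The multi-strategy rule of step B) can be sketched as follows. The weight is computed through the numerically equivalent logistic form 1/(1 + exp(-k ln r)) to avoid overflowing r^k for large k, and the eps guard against a zero denominator is an addition of this sketch, not part of the patent (whose embodiment uses k = 80).

```python
import numpy as np

def fuse_highfreq(cA, cB, sA, sB, k=80.0, eps=1e-12):
    """Sigmoid-based multi-strategy fusion of high-frequency coefficients.

    omega -> 1 where A's edge strength dominates (choose A), omega -> 0 where
    B dominates, omega ~ 0.5 where the strengths are comparable (weighted
    average), so the rule interpolates between choose-max and averaging.
    """
    r = (np.asarray(sA, dtype=float) + eps) / (np.asarray(sB, dtype=float) + eps)
    t = np.clip(k * np.log(r), -50.0, 50.0)   # clip avoids overflow; omega unaffected in practice
    omega = 1.0 / (1.0 + np.exp(-t))          # == r**k / (1 + r**k)
    return omega * cA + (1.0 - omega) * cB
```

With k = 80 even a mild strength ratio such as 2:1 drives ω essentially to 1, so the rule behaves as choose-max for complementary regions and as an average only where the edge strengths are nearly equal.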
Compared with the prior art, the invention has notable advantages: (1) it adopts SIST, an effective transform tool with directional sensitivity and shift invariance that can describe an image at multiple scales and in multiple directions and gives a truly two-dimensional sparse representation of image features such as edges; (2) it estimates the local structure of the image by singular value decomposition of the local structure tensor, yielding a descriptor that identifies image sharpness; (3) it constructs a new detail-aware edge strength measure that effectively extracts the horizontal, vertical, and diagonal edge detail features of the image; (4) it builds a multi-strategy fusion rule from the new edge strength measure and the sigmoid function, which dynamically selects the fusion strategy according to the complementary and redundant properties of the images. The fused images produced by the method are rich in texture with prominent details, and the method has good stability and practicality.
Brief description of the drawings
Fig. 1 is a flow diagram of the invention.
Fig. 2a is the right-focused 'Clock' image to be fused.
Fig. 2b is the left-focused 'Clock' image to be fused.
Fig. 2c is the fusion result of the invention for Fig. 2a and Fig. 2b.
Fig. 2d is the fusion result of the gradient pyramid method for Fig. 2a and Fig. 2b.
Fig. 2e is the fusion result of the shift-invariant wavelet method for Fig. 2a and Fig. 2b.
Fig. 2f is the fusion result of the PCNN-and-shearlet method for Fig. 2a and Fig. 2b.
Fig. 3a is the remote sensing 3-band image to be fused.
Fig. 3b is the remote sensing 8-band image to be fused.
Fig. 3c is the fusion result of the invention for Fig. 3a and Fig. 3b.
Fig. 3d is the fusion result of the gradient pyramid method for Fig. 3a and Fig. 3b.
Fig. 3e is the fusion result of the shift-invariant wavelet method for Fig. 3a and Fig. 3b.
Fig. 3f is the fusion result of the PCNN-and-shearlet method for Fig. 3a and Fig. 3b.
Embodiments
The specific embodiment of the invention is described in further detail below with reference to the drawings. As shown in Fig. 1, the method comprises the following steps:
1) Prepare the two M × N source images A and B to be fused (M = 256, N = 256 in this embodiment) and decompose each of them with SIST into low-frequency and high-frequency sub-band coefficients {C_j0^A, C_j^A} and {C_j0^B, C_j^B}, where C_j0^A and C_j0^B are the low-frequency sub-band coefficients and C_j^A, C_j^B are the series of high-frequency sub-band coefficients. The filter adopted by SIST is 'maxflat', and the numbers of directions of the decomposition are 6, 6, 6.
2) Fuse the low-frequency and the high-frequency sub-band coefficients with different fusion rules:
2.1) For the low-frequency sub-band coefficients, obtain a local structure descriptor that identifies image sharpness by singular value decomposition of the local structure tensor, use it as the activity measure in the fusion strategy, and fuse with a choose-max strategy:
A) For a point f(x, y) in the image, its gradient is g = ∇f(x, y). Over the t × t neighborhood of f(x, y) (t = 3 in this embodiment), the local gradient matrix of the point is

    J = [ g_x(1)   g_y(1)
          ...      ...
          g_x(t^2) g_y(t^2) ]

where g_x(k) and g_y(k), k = 1, 2, ..., t^2, are the derivatives in the x and y directions at the k-th pixel of the neighborhood;
The local structure tensor of the point f(x, y) is then

    T = J^T J = [ Σ_k g_x(k)g_x(k)   Σ_k g_x(k)g_y(k)
                  Σ_k g_x(k)g_y(k)   Σ_k g_y(k)g_y(k) ]

with the sums taken over k = 1, 2, ..., t^2;
The SVD of the local structure tensor T is

    T = USV^T = U [ s1  0
                    0   s2 ] [v1, v2]^T

where U and V are orthogonal matrices, s1 and s2 are the eigenvalues, and v1 and v2 are the corresponding eigenvectors;
The local descriptor Q measuring image sharpness is

    Q = (s1 - s2)^2 * ((s1 - s2) / (s1 + s2))^2

B) Fuse with the choose-max strategy based on the local descriptor Q:

    C_j0^F(x, y) = C_j0^A(x, y)  if Q_j0^A(x, y) >= Q_j0^B(x, y)
                   C_j0^B(x, y)  otherwise

where C_j0^A(x, y), C_j0^B(x, y) and C_j0^F(x, y) are the low-frequency coefficients of the source images A and B and the fused image F at the point (x, y), respectively.
2.2) For the high-frequency sub-band coefficients, compute the edge strength at each high-frequency coefficient and fuse with the multi-strategy fusion rule based on the sigmoid function and the edge strength measure:
A) Compute the edge strength of the high-frequency sub-band coefficients
1. Horizontal edge strength
Centered at the point (x, y), open an n × n window (n = 3 in this embodiment) to obtain the image block X with pixels x_ij. Regard each row of X as an observation and each column as a variable, and compute the unbiased estimate C_h(x) of the covariance matrix:

    C_h(x) = 1/(n - 1) * Σ_{i=1}^{n} (x_i - x̄)(x_i - x̄)^T

where x_i is the i-th observation of the n-dimensional variable and x̄ is the mean of the observations;
Compute the eigenvalues of C_h(x): first diagonalize C_h(x) by SVD to obtain the singular value matrix Σ, then compute the diagonal eigenvalue matrix Λ = Σ^T Σ, whose diagonal entries are the eigenvalues of C_h(x). The largest eigenvalue λ_h of C_h(x) is the horizontal edge strength, namely

    λ_h = max(eig(C_h(x)))

2. Vertical edge strength
Regard each column of the image block X as an observation and each row as a variable, and compute the unbiased estimate C_v(y) of the covariance matrix:

    C_v(y) = 1/(n - 1) * Σ_{i=1}^{n} (y_i - ȳ)(y_i - ȳ)^T

where y_i is the i-th observation of the n-dimensional variable and ȳ is the mean of the observations;
Compute the largest eigenvalue λ_v of C_v(y) in the same way: diagonalize C_v(y) by SVD to obtain the singular value matrix Σ, compute Λ = Σ^T Σ, whose diagonal entries are the eigenvalues of C_v(y), and take the largest eigenvalue as the vertical edge strength, namely

    λ_v = max(eig(C_v(y)))
3. Diagonal edge strength
Group the pixels of the image block X along the main diagonal direction into an image block Z1, and along the anti-diagonal direction into an image block Z2, with pixels z_ij^1 and z_ij^2 respectively. Treat each row of Z1 as an observation and each column as a variable, and likewise for Z2, and compute the unbiased estimates C_D1(z1) and C_D2(z2) of their covariance matrices:

    C_D1(z1) = 1/(n - 1) * Σ_{i=1}^{n} (z_i^1 - z̄1)(z_i^1 - z̄1)^T
    C_D2(z2) = 1/(n - 1) * Σ_{i=1}^{n} (z_i^2 - z̄2)(z_i^2 - z̄2)^T

where z̄1 = (1/n^2) Σ_i Σ_j z_ij^1 and z̄2 = (1/n^2) Σ_i Σ_j z_ij^2; z_i^1 and z_i^2 are the i-th observations of the n-dimensional variables;
Compute the eigenvalues of C_D1(z1) and C_D2(z2) as before: diagonalize each matrix by SVD to obtain the singular value matrix Σ, compute Λ = Σ^T Σ, whose diagonal entries are the eigenvalues, and take the largest eigenvalues

    λ_D1 = max(eig(C_D1(z1)))
    λ_D2 = max(eig(C_D2(z2)))

as the diagonal edge strengths;
The edge strength S is the sum of the horizontal edge strength λ_h, the vertical edge strength λ_v, and the two diagonal edge strengths, namely

    S = λ_h + λ_v + λ_D1 + λ_D2
B) Multi-strategy fusion based on the sigmoid function
Apply the multi-strategy fusion rule based on the sigmoid function to the high-frequency sub-bands, with the weighting coefficient ω computed by the sigmoid function, namely

    C_j^F(x, y) = ω(x, y) × C_j^A(x, y) + (1 - ω(x, y)) × C_j^B(x, y)
    ω(x, y) = (S_j^A(x, y) / S_j^B(x, y))^k / (1 + (S_j^A(x, y) / S_j^B(x, y))^k)

where C_j^A(x, y), C_j^B(x, y) and C_j^F(x, y) are the high-frequency coefficients of the source images A and B and the fused image F at the point (x, y), respectively; S_j^A and S_j^B are the edge strengths of the high-frequency sub-band coefficients; k is the contraction factor (k = 80 in this embodiment).
3) Apply the inverse SIST to the fused coefficients to obtain the fused image F.
The effect of the invention can be further illustrated by the following experimental results:
1. Experimental conditions and methods
Hardware platform: Intel(R) processor, CPU frequency 1.80 GHz, 1.0 GB memory;
Software platform: MATLAB R2009a. Two groups of registered source images are used in the experiments, namely the multi-focus 'Clock' images and the remote sensing 'Band' images, each 256 × 256 in bmp format. The 'Clock' source images are shown in Fig. 2a and Fig. 2b: Fig. 2a is the 'Clock' image focused on the right side, and Fig. 2b is the 'Clock' image focused on the left side. The 'Band' source images are shown in Fig. 3a and Fig. 3b: Fig. 3a is the 'Band' remote sensing 3-band source image, and Fig. 3b is the 'Band' remote sensing 8-band source image.
Three existing fusion methods are adopted for comparison in the experiments:
Method 1 is the fusion method based on the gradient pyramid transform;
Method 2 is the fusion method based on the shift-invariant wavelet transform;
Method 3 is the fusion method based on PCNN and the shearlet transform, see "A Novel Algorithm of Image Fusion Based on PCNN and Shearlet", International Journal of Digital Content Technology & its Applications, 2011, 5(12).
2. Simulation content
Simulation 1: following the technical scheme of the invention, the 'Clock' multi-focus source images (Fig. 2a and Fig. 2b) are fused; Fig. 2c to Fig. 2f show the simulation results of the inventive method and the comparison methods. In terms of subjective visual effect, the fused image of method 1 has low contrast; the fused image of method 2 is blurry; in the fused image of method 3 the alarm clock on the left is blurry; the fused image of the invention has moderate overall brightness and is clear, retaining the edge and detail information of the images.
Simulation 2: following the technical scheme of the invention, a pair of remote sensing images of different wavebands (Fig. 3a and Fig. 3b) are fused; Fig. 3c to Fig. 3f show the simulation results of the inventive method and the comparison methods. In terms of subjective visual effect, the contrast of the fused image of method 1 is reduced and its details are blurry; the visual effect of the fused image of method 2 is improved but still unsatisfactory; the contrast of the fused image of method 3 is reduced; the fused image of the invention is rich in features with clear details, and its overall effect is satisfactory.
The fusion results of the invention and of the comparison methods are evaluated with objective indices. Table 1 gives the objective evaluation indices of the 'Clock' multi-focus fusion results, and Table 2 gives those of the 'Band' remote sensing fusion results; the best value in each column is shown in bold.
Table 1. Objective evaluation indices of the multi-focus image fusion results

Algorithm       STD      En      AG       SF       Q^AB/F
Method 1        46.7559  7.2604  8.5311   14.7576  0.5113
Method 2        48.9593  7.2860  9.0084   15.8921  0.5815
Method 3        49.9596  7.3751  9.5936   17.4871  0.5835
The invention   50.8390  7.3564  11.6653  21.6307  0.6548
Table 2. Objective evaluation indices of the remote sensing image fusion results

Algorithm       STD      En      AG       SF       Q^AB/F
Method 1        57.5358  7.2013  10.2872  21.9183  0.4807
Method 2        61.7093  7.1090  11.9365  26.2345  0.5731
Method 3        61.5508  7.3671  13.0376  27.8268  0.5812
The invention   64.2883  7.2732  16.0422  34.2710  0.6213
STD, En, AG, SF, and Q^AB/F in Tables 1 and 2 denote the standard deviation, information entropy, average gradient, spatial frequency, and edge retention rate, respectively.
Standard deviation (STD): measures the dispersion of gray values about the gray mean; the larger the standard deviation, the more dispersed the gray levels and the more information the image contains.
Information entropy (En): measures the amount of information carried by the image; the larger the entropy, the more information is contained and the better the fusion effect.
Average gradient (AG): measures the sharpness of the image; the larger its value, the clearer the image.
Spatial frequency (SF): measures the overall activity level of the spatial domain; the larger its value, the better the fusion effect.
Edge retention rate (Q^AB/F): measures the amount of edge information transferred from the source images to the fused image; the larger its value, the clearer the edges of the fused image and the better the fusion effect.
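Four of the five indices above depend only on the fused image and can be sketched directly; standard textbook definitions are assumed here (the patent does not spell out its formulas), and Q^AB/F is omitted because it also needs the source images and an edge-preservation model.

```python
import numpy as np

def fusion_metrics(img):
    """STD, En (bits), AG, and SF for a grayscale image with values in
    [0, 256); textbook definitions assumed, not taken from the patent."""
    img = img.astype(float)
    std = img.std()
    # information entropy over 256 gray levels
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    en = -np.sum(p * np.log2(p))
    # average gradient over interior pixel pairs
    dx = np.diff(img, axis=1)[:-1, :]
    dy = np.diff(img, axis=0)[:, :-1]
    ag = np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2))
    # spatial frequency: row and column frequencies combined
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    sf = np.sqrt(rf ** 2 + cf ** 2)
    return {"STD": std, "En": en, "AG": ag, "SF": sf}
```

A constant image scores zero on all four indices, which matches their interpretation above: no dispersion, no information, no sharpness, no spatial activity.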
As Tables 1 and 2 show, all of the objective indices of the invention except En are better than those of the comparison methods.
As the fusion results of the simulation experiments show, the fused images of the invention are globally clear and rich in information, containing abundant edge and detail content. Both the subjective visual comparison and the objective evaluation verify the effectiveness of the invention.

Claims (4)

1. A novel image fusion method based on the shift-invariant shearlet transform, characterized by comprising the following steps:
1) preparing two source images to be fused and decomposing each of them with the shift-invariant shearlet transform into low-frequency and high-frequency sub-band coefficients;
2) fusing the low-frequency and the high-frequency sub-band coefficients with different fusion rules:
2.1) for the low-frequency sub-band coefficients, obtaining a local structure descriptor that identifies image sharpness by singular value decomposition of the local structure tensor, using it as the activity measure in the fusion strategy, and fusing with a choose-max strategy;
2.2) for the high-frequency sub-band coefficients, constructing an edge strength measure and fusing with a multi-strategy fusion rule based on the sigmoid function and the edge strength measure;
3) applying the inverse shift-invariant shearlet transform to the fused coefficients obtained in step 2) to obtain the fused image.
2. The novel image fusion method based on the shift-invariant shearlet transform according to claim 1, characterized in that step 1) comprises:
decomposing the two M × N source images A and B to be fused with the shift-invariant shearlet transform into low-frequency and high-frequency sub-band coefficients {C_j0^A, C_j^A} and {C_j0^B, C_j^B}, where C_j0^A and C_j0^B are the low-frequency sub-band coefficients and C_j^A, C_j^B are the series of high-frequency sub-band coefficients.
3. The new image fusion method based on the shift-invariant shearlet transform according to claim 1, characterized in that step 2.1) comprises the following steps:
A) For a point f(x, y) in the image, its gradient is g = ∇f(x, y). Over a t × t neighbourhood of f(x, y), the local gradient matrix of this point is

J = \begin{pmatrix} \vdots & \vdots \\ g_x(k) & g_y(k) \\ \vdots & \vdots \end{pmatrix}

where k = 1, 2, …, t^2, and g_x(k) and g_y(k) are the derivatives in the x and y directions, respectively.
The local structure tensor at f(x, y) is then

T = J^T J = \begin{pmatrix} \sum_{k=1}^{t^2} g_x(k) g_x(k) & \sum_{k=1}^{t^2} g_x(k) g_y(k) \\ \sum_{k=1}^{t^2} g_x(k) g_y(k) & \sum_{k=1}^{t^2} g_y(k) g_y(k) \end{pmatrix}

The singular value decomposition of the local structure tensor T is

T = U S V^T = U \begin{pmatrix} s_1 & 0 \\ 0 & s_2 \end{pmatrix} [v_1, v_2]^T

where U and V are orthogonal matrices, s_1 and s_2 are the eigenvalues, and v_1 and v_2 are the corresponding eigenvectors.
The local descriptor Q measuring image sharpness is

Q = (s_1 - s_2)^2 \left( \frac{s_1 - s_2}{s_1 + s_2} \right)^2

B) Fuse with the choose-max strategy based on the local descriptor Q:

C_{j_0}^F(x, y) = C_{j_0}^A(x, y) if Q_{j_0}^A(x, y) ≥ Q_{j_0}^B(x, y); otherwise C_{j_0}^F(x, y) = C_{j_0}^B(x, y)

where C_{j_0}^A(x, y), C_{j_0}^B(x, y) and C_{j_0}^F(x, y) are the low-frequency coefficients of source images A, B and fused image F at point (x, y), respectively.
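Step 2.1) can be sketched in NumPy/SciPy. This is an illustrative sketch, not the patented implementation: the symmetric 2 × 2 structure tensor is diagonalized in closed form (equivalent to its SVD), the neighbourhood sum is replaced by a `uniform_filter` average (a constant factor that does not change the choose-max comparison), and the function names, `eps` guard, and boundary mode are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sharpness_descriptor(img, t=3):
    """Local structure descriptor Q of claim 3 A): accumulate the structure
    tensor T = J^T J over a t x t neighbourhood, take its two eigenvalues
    s1 >= s2, and form Q = (s1 - s2)^2 * ((s1 - s2)/(s1 + s2))^2."""
    img = img.astype(float)
    gy, gx = np.gradient(img)  # derivatives in y (rows) and x (columns)
    # tensor components averaged over the t x t neighbourhood
    jxx = uniform_filter(gx * gx, size=t, mode="nearest")
    jxy = uniform_filter(gx * gy, size=t, mode="nearest")
    jyy = uniform_filter(gy * gy, size=t, mode="nearest")
    # closed-form eigenvalues of the symmetric tensor [[jxx, jxy], [jxy, jyy]]
    tr = jxx + jyy
    disc = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    s1, s2 = (tr + disc) / 2, (tr - disc) / 2
    eps = 1e-12  # guard against division by zero in flat regions
    return (s1 - s2) ** 2 * ((s1 - s2) / (s1 + s2 + eps)) ** 2

def fuse_low(low_a, low_b, t=3):
    """Choose-max rule of claim 3 B): keep the coefficient whose Q is larger."""
    qa = sharpness_descriptor(low_a, t)
    qb = sharpness_descriptor(low_b, t)
    return np.where(qa >= qb, low_a, low_b)
```

For a flat image Q is identically zero, so any image containing an edge wins the choose-max comparison everywhere against it.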
4. The new image fusion method based on the shift-invariant shearlet transform according to claim 1, characterized in that step 2.2) comprises the following steps:
A) Compute the edge strength of the high-frequency sub-band coefficients:
① Horizontal edge strength. Centred at point (x, y), take an n × n window neighbourhood to obtain an image block X with pixels x_{ij}. Treat each row of the image block X as an observation and each column as a variable, and compute the unbiased estimate C_h(x) of the covariance matrix:

C_h(x) = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T

where x_i is the i-th observation of the n-dimensional variable, and \bar{x} is the mean of the observations.
To compute the eigenvalues of the matrix C_h(x), first diagonalize C_h(x) by SVD to obtain the singular value matrix Σ, then compute the diagonal eigenvalue matrix Λ = Σ^T Σ; the diagonal of Λ holds the eigenvalues of C_h(x). The largest eigenvalue λ_h of C_h(x) is the horizontal edge strength:

λ_h = max(eigen(C_h(x)))

② Vertical edge strength. Treat each column of the image block X as an observation and each row as a variable, and compute the unbiased estimate C_v(y) of the covariance matrix:

C_v(y) = \frac{1}{n-1} \sum_{i=1}^{n} (y_i - \bar{y})(y_i - \bar{y})^T

where y_i is the i-th observation of the n-dimensional variable, and \bar{y} is the mean of the observations.
To compute the largest eigenvalue λ_v of C_v(y), first diagonalize C_v(y) by SVD to obtain the singular value matrix Σ, then compute the diagonal eigenvalue matrix Λ = Σ^T Σ; the diagonal of Λ holds the eigenvalues of C_v(y). The largest eigenvalue λ_v of C_v(y) is the vertical edge strength:

λ_v = max(eigen(C_v(y)))

③ Diagonal edge strengths. Group the pixels of the image block X along the main diagonal direction into an image block Z^1 and along the anti-diagonal direction into an image block Z^2, with pixels z^1_{ij} and z^2_{ij} respectively. Treat each row of Z^1 as an observation and each column as a variable, and likewise for Z^2, and compute the unbiased estimates C_{D_1}(z^1) and C_{D_2}(z^2) of the covariance matrices:

C_{D_1}(z^1) = \frac{1}{n-1} \sum_{i=1}^{n} (z_i^1 - \bar{z}^1)(z_i^1 - \bar{z}^1)^T

C_{D_2}(z^2) = \frac{1}{n-1} \sum_{i=1}^{n} (z_i^2 - \bar{z}^2)(z_i^2 - \bar{z}^2)^T

where \bar{z}^1 = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} z^1_{ij} and \bar{z}^2 = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} z^2_{ij}; z_i^1 and z_i^2 are the i-th observations of the n-dimensional variables.
Compute the eigenvalues of C_{D_1}(z^1) and C_{D_2}(z^2) in the same way (diagonalize by SVD to obtain Σ, then Λ = Σ^T Σ); their largest eigenvalues are the diagonal edge strengths:

λ_{D_1} = max(eigen(C_{D_1}(z^1)))

λ_{D_2} = max(eigen(C_{D_2}(z^2)))

The edge strength S is the sum of the horizontal edge strength λ_h, the vertical edge strength λ_v, and the two diagonal edge strengths:

S = λ_h + λ_v + λ_{D_1} + λ_{D_2}

B) Multi-strategy fusion based on the sigmoid function. Fuse the high-frequency sub-bands with the multi-strategy fusion rule based on the sigmoid function; the weighting coefficient ω is computed from the sigmoid function:

C_j^F(x, y) = ω(x, y) · C_j^A(x, y) + (1 - ω(x, y)) · C_j^B(x, y)

ω(x, y) = \frac{(S_j^A(x, y) / S_j^B(x, y))^k}{1 + (S_j^A(x, y) / S_j^B(x, y))^k}

where C_j^A(x, y), C_j^B(x, y) and C_j^F(x, y) are the high-frequency coefficients of source images A, B and fused image F at point (x, y), respectively; S_j^A and S_j^B are the edge strengths of the high-frequency sub-band coefficients; k is a contraction factor.
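The edge-strength measure and sigmoid rule of claim 4 can be sketched in NumPy. This is an illustrative sketch under stated assumptions: `np.cov`/`np.linalg.eigvalsh` replace the SVD route described above (equivalent for a symmetric covariance matrix), the diagonal blocks Z^1/Z^2 are formed by cyclically shifting rows so diagonals line up as columns (one plausible reading of the claim), and the `eps` guard and edge padding are additions not in the patent.

```python
import numpy as np

def edge_strength(block):
    """Edge strength S of claim 4 A) for one n x n block: the sum of the
    largest covariance eigenvalues for four readings of the block --
    rows as observations (horizontal), columns as observations (vertical),
    and the two diagonal regroupings Z1 and Z2."""
    n = block.shape[0]

    def max_cov_eig(x):
        # unbiased covariance with rows as observations, columns as variables
        c = np.cov(x, rowvar=False)
        return float(np.linalg.eigvalsh(c).max())

    lam_h = max_cov_eig(block)    # horizontal: rows are observations
    lam_v = max_cov_eig(block.T)  # vertical: columns are observations
    # diagonal regrouping: shift row i cyclically by -i (+i) so main
    # (anti-) diagonals of the block line up as columns of Z1 (Z2)
    z1 = np.stack([np.roll(block[i], -i) for i in range(n)])
    z2 = np.stack([np.roll(block[i], i) for i in range(n)])
    return lam_h + lam_v + max_cov_eig(z1) + max_cov_eig(z2)

def fuse_high(high_a, high_b, n=5, k=1.0):
    """Claim 4 B): per-coefficient sigmoid weight w = r^k / (1 + r^k) with
    r = S_A / S_B computed from the surrounding n x n window."""
    pad = n // 2
    a = np.pad(high_a.astype(float), pad, mode="edge")
    b = np.pad(high_b.astype(float), pad, mode="edge")
    out = np.empty(high_a.shape, dtype=float)
    eps = 1e-12  # guards the ratio when both strengths are zero
    for i in range(high_a.shape[0]):
        for j in range(high_a.shape[1]):
            s_a = edge_strength(a[i:i + n, j:j + n])
            s_b = edge_strength(b[i:i + n, j:j + n])
            r = (s_a + eps) / (s_b + eps)
            w = r ** k / (1 + r ** k)
            out[i, j] = w * high_a[i, j] + (1 - w) * high_b[i, j]
    return out
```

When both windows are featureless the ratio r is 1 and the rule degrades gracefully to an average; where one source has a strong edge and the other is flat, the weight saturates towards that source, which is the intended behaviour of the sigmoid rule.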
CN201410470345.3A 2014-09-15 2014-09-15 Image interfusion method based on translation invariant shearing wave conversion Active CN104268833B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410470345.3A CN104268833B (en) 2014-09-15 2014-09-15 Image interfusion method based on translation invariant shearing wave conversion

Publications (2)

Publication Number Publication Date
CN104268833A true CN104268833A (en) 2015-01-07
CN104268833B CN104268833B (en) 2018-06-22

Family

ID=52160353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410470345.3A Active CN104268833B (en) 2014-09-15 2014-09-15 Image interfusion method based on translation invariant shearing wave conversion

Country Status (1)

Country Link
CN (1) CN104268833B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080319375A1 (en) * 2007-06-06 2008-12-25 Biovaluation & Analysis, Inc. Materials, Methods, and Systems for Cavitation-mediated Ultrasonic Drug Delivery in vivo
CN102324021A (en) * 2011-09-05 2012-01-18 电子科技大学 Infrared dim-small target detection method based on shear wave conversion
CN103049895A (en) * 2012-12-17 2013-04-17 华南理工大学 Multimode medical image fusion method based on translation constant shear wave transformation
CN103985109A (en) * 2014-06-05 2014-08-13 电子科技大学 Feature-level medical image fusion method based on 3D (three dimension) shearlet transform


Non-Patent Citations (4)

Title
Glenn Easley et al.: "Sparse directional image representations using the discrete shearlet transform", Applied and Computational Harmonic Analysis *
Liu Wei et al.: "Image fusion algorithm based on the shift-invariant shearlet transform domain", Acta Photonica Sinica *
An Fu et al.: "Infrared polarization image fusion model driven by fuzzy logic and feature difference", Infrared Technology *
Shao Yu et al.: "No-reference image quality assessment method based on the local structure tensor", Journal of Electronics & Information Technology *

Cited By (12)

Publication number Priority date Publication date Assignee Title
CN104899848A (en) * 2015-07-02 2015-09-09 苏州科技学院 Self-adaptive multi-strategy image fusion method based on riemannian metric
CN106056564A (en) * 2016-05-27 2016-10-26 西华大学 Edge sharp image fusion method based on joint thinning model
CN106056564B (en) * 2016-05-27 2018-10-16 西华大学 Edge clear image interfusion method based on joint sparse model
CN106127719A (en) * 2016-06-20 2016-11-16 中国矿业大学 A kind of novel neutral net Method of Medical Image Fusion
CN106447686A (en) * 2016-09-09 2017-02-22 西北工业大学 Method for detecting image edges based on fast finite shearlet transformation
CN106897987A (en) * 2017-01-18 2017-06-27 江南大学 Image interfusion method based on translation invariant shearing wave and stack own coding
CN106897999A (en) * 2017-02-27 2017-06-27 江南大学 Apple image fusion method based on Scale invariant features transform
CN107610165A (en) * 2017-09-12 2018-01-19 江南大学 The 3 D shearing multi-modal medical image sequence fusion methods of wave zone based on multiple features
CN107610165B (en) * 2017-09-12 2020-10-23 江南大学 Multi-feature-based 3-D shear wave domain multi-modal medical sequence image fusion method
CN109685058A (en) * 2017-10-18 2019-04-26 杭州海康威视数字技术股份有限公司 A kind of images steganalysis method, apparatus and computer equipment
US11347977B2 (en) 2017-10-18 2022-05-31 Hangzhou Hikvision Digital Technology Co., Ltd. Lateral and longitudinal feature based image object recognition method, computer device, and non-transitory computer readable storage medium
CN109685752A (en) * 2019-01-09 2019-04-26 中国科学院长春光学精密机械与物理研究所 A kind of multiple dimensioned Shearlet area image method for amalgamation processing decomposed based on block

Also Published As

Publication number Publication date
CN104268833B (en) 2018-06-22

Similar Documents

Publication Publication Date Title
CN104268833A (en) New image fusion method based on shift invariance shearlet transformation
CN108573276B (en) Change detection method based on high-resolution remote sensing image
Jin et al. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain
CN101551863B (en) Method for extracting roads from remote sensing image based on non-sub-sampled contourlet transform
Li et al. Complex contourlet-CNN for polarimetric SAR image classification
CN102629374B (en) Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding
CN104392463B (en) Image salient region detection method based on joint sparse multi-scale fusion
CN103413151B (en) Hyperspectral image classification method based on figure canonical low-rank representation Dimensionality Reduction
CN103927511B (en) image identification method based on difference feature description
CN102629378B (en) Remote sensing image change detection method based on multi-feature fusion
CN105335975B (en) Polarization SAR image segmentation method based on low-rank decomposition and statistics with histogram
CN104299232B (en) SAR image segmentation method based on self-adaptive window directionlet domain and improved FCM
CN103093478B (en) Based on the allos image thick edges detection method of quick nuclear space fuzzy clustering
Liang et al. Maximum likelihood classification of soil remote sensing image based on deep learning
Xiao et al. Image Fusion
CN103985109B (en) Feature-level medical image fusion method based on 3D (three dimension) shearlet transform
CN102096913B (en) Multi-strategy image fusion method under compressed sensing framework
Junwu et al. An infrared and visible image fusion algorithm based on LSWT-NSST
CN105512670B (en) Divided based on KECA Feature Dimension Reduction and the HRCT peripheral nerve of cluster
CN103020931B (en) Multisource image fusion method based on direction wavelet domain hidden Markov tree model
CN104361571A (en) Infrared and low-light image fusion method based on marginal information and support degree transformation
Wu et al. Fusing optical and synthetic aperture radar images based on shearlet transform to improve urban impervious surface extraction
Yang et al. Infrared and visible image fusion based on QNSCT and Guided Filter
CN103198456A (en) Remote sensing image fusion method based on directionlet domain hidden Markov tree (HMT) model
CN111275680B (en) SAR image change detection method based on Gabor convolution network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant