CN102968781B - Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation - Google Patents


Info

Publication number
CN102968781B
CN102968781B (application CN201210529022.8A)
Authority
CN
China
Prior art keywords
image
coefficient
band
alpha
nsct
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210529022.8A
Other languages
Chinese (zh)
Other versions
CN102968781A (en
Inventor
彭进业
王珺
何贵青
阎昆
夏召强
冯晓毅
蒋晓悦
吴俊
李会方
谢红梅
杨雨奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201210529022.8A priority Critical patent/CN102968781B/en
Publication of CN102968781A publication Critical patent/CN102968781A/en
Application granted granted Critical
Publication of CN102968781B publication Critical patent/CN102968781B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention provides an image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and sparse representation. A learned dictionary is applied to the low-frequency subband coefficients, whose sparsity is poor, and sparse representation is used to extract the common and unique coefficients of the source images, thereby improving the sparsity of the low-frequency subband; the fusion weights are then adjusted adaptively according to the activity level of the unique coefficients. The high-frequency directional subband coefficients, whose sparsity is already high, are fused by taking the maximum of the sums of absolute values of the directional subbands at the same scale, so that the salient features of the source images are captured and the fusion effect is ultimately improved.

Description

Image fusion method based on NSCT and sparse representation
Technical field
The present invention relates to an image fusion method.
Background technology
In recent years, the Non-Subsampled Contourlet Transform (NSCT) has been applied successfully to image fusion and has produced good fusion results, because it offers a shift-invariant, multi-resolution, multi-directional, and anisotropic image representation and effectively overcomes the inability of traditional wavelets to handle singularities in two or more dimensions. In image fusion, however, we want the extracted image representation coefficients to be highly sparse and feature-preserving, so that fusing only a small number of coefficients yields a good result. Unfortunately, very few of the low-frequency subband coefficients obtained by the NSCT are approximately zero; that is, the low-frequency information of the image is not represented sparsely, and fusing these coefficients directly is unfavorable for extracting the features of the source images. Since the low-frequency subband contains most of the image's energy and largely determines the quality of the fusion result, we seek to improve the sparsity of the low-frequency subband coefficients in order to obtain a better fusion result.
Summary of the invention
To overcome the shortcoming of the prior art that, after the NSCT, the low-frequency subband coefficients containing the main energy of the image have poor sparsity, which is unfavorable for extracting useful information for fusion, the invention provides an image fusion method based on NSCT and sparse representation. A dictionary is learned for the low-frequency subband coefficients, whose sparsity is poor, and sparse representation is used to extract the common and unique coefficients of the source images, thereby improving the sparsity of the low-frequency subband; the fusion weights are then adjusted adaptively according to the activity level of the unique coefficients. The high-frequency directional subband coefficients, whose sparsity is already high, are fused by taking the maximum of the sums of absolute values of the directional subbands at the same scale, so as to capture the salient features of the source images and ultimately improve the fusion effect.
The technical solution adopted by the invention to solve the technical problem comprises the following steps:
1. Dictionary training part:
Assume the source images are registered: there are K source images of size M × N, denoted $I_1, \ldots, I_K$.
(1.1) Decompose each training image with a J-level NSCT (J is typically 3 to 5), obtaining one low-frequency subband coefficient matrix and $\sum_{j=1}^{J} 2^{l_j}$ high-frequency directional subband coefficient matrices, where $l_j$ is the number of directional decomposition levels at scale j. The training images may be the source images themselves, or images acquired in the same way as the source images;
(1.2) Initialize the dictionary $D \in R^{n \times m}$, where n is the size of a dictionary atom and m is the number of atoms per sub-dictionary. To ensure the overcompleteness of the dictionary while keeping the computation tractable, n = 64 and m = 256 are commonly used;
(1.3) Slide a window of size $\sqrt{n} \times \sqrt{n}$ over the low-frequency subband coefficients with step 1, extracting blocks from the top left to the bottom right; stretch each block into a column vector and arrange the columns in order into a matrix;
(1.4) Train the dictionary D on this matrix with the K-SVD algorithm, and save the dictionary;
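As an illustration, steps (1.3) and (1.4) can be sketched as follows. This is a minimal sketch under stated assumptions: a random array stands in for the NSCT low-frequency subband, and scikit-learn's MiniBatchDictionaryLearning is used as a readily available stand-in for the K-SVD algorithm the patent specifies.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def extract_patches(subband, patch=8, step=1):
    """Step (1.3): slide a patch x patch window over the subband with the
    given step, stretch each block into a column, and stack the columns."""
    H, W = subband.shape
    cols = [subband[i:i + patch, j:j + patch].reshape(-1)
            for i in range(0, H - patch + 1, step)
            for j in range(0, W - patch + 1, step)]
    return np.stack(cols, axis=1)            # shape (patch*patch, n_blocks)

# Toy low-frequency subband in place of the NSCT output.
rng = np.random.default_rng(0)
lowpass = rng.standard_normal((32, 32))
V = extract_patches(lowpass)                 # (64, 625)

# Step (1.4): the patent trains D (64 x 256) with K-SVD; the mini-batch
# dictionary learner below is a stand-in, not the patent's algorithm.
D = MiniBatchDictionaryLearning(n_components=256,
                                random_state=0).fit(V.T).components_.T
print(D.shape)                               # -> (64, 256)
```

With n = 64 and m = 256 this yields the overcomplete 64 × 256 dictionary assumed throughout the method.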
2. Image fusion part:
(2.1) Decompose the source images with the NSCT as in step (1.1);
(2.2) Fuse the low-frequency subband coefficients of the source images in the following 6 steps:
1. Arrange the low-frequency subbands of the source images into matrices $V_k$, k = 1, …, K, as in step (1.3);
2. Express the matrices $V_k$ of all source images jointly as:
$$\begin{bmatrix} V_1 \\ \vdots \\ V_K \end{bmatrix} = \begin{bmatrix} D & D & 0 & \cdots & 0 \\ D & 0 & D & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ D & 0 & 0 & \cdots & D \end{bmatrix} \begin{bmatrix} \alpha_C \\ \alpha_1^U \\ \vdots \\ \alpha_K^U \end{bmatrix} \qquad (1)$$
where $\alpha_C$ denotes the common sparse representation coefficients, contained in all source images; $\alpha_k^U$ denotes the unique sparse representation coefficients of the k-th image, contained only in the k-th source image; 0 denotes an n × m all-zero matrix; and D is the dictionary trained in step (1.4).
Let
$$V = \begin{bmatrix} V_1 \\ \vdots \\ V_K \end{bmatrix}, \quad \alpha = \begin{bmatrix} \alpha_C \\ \alpha_1^U \\ \vdots \\ \alpha_K^U \end{bmatrix}, \quad D' = \begin{bmatrix} D & D & 0 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ D & 0 & 0 & \cdots & D \end{bmatrix}.$$
Then equation (1) simplifies to
$$V = D'\alpha \qquad (2)$$
3. To make $\alpha$ in (2) as sparse as possible, the orthogonal matching pursuit (OMP) algorithm is used to solve:
$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|D'\alpha - V\|_2^2 \le \epsilon \qquad (3)$$
That is, $\alpha_C$ and $\alpha_k^U$, k = 1, …, K, are obtained.
4. According to the contribution of each source image to the fusion, fuse the low-frequency subband coefficients as:
$$\alpha_f = \alpha_C + \sum_{k=1}^{K} \omega_k \alpha_k^U, \qquad \omega_k = \frac{n_k}{\max_{1 \le i \le K}(n_i)} \qquad (4)$$
where $n_k$ is the activity level of the unique coefficients; it reflects the energy, i.e. the importance, of the features:
$$n_k = \|\alpha_k^U\|_1 \qquad (5)$$
5. The low-frequency subband coefficients of the fused image are then reconstructed as:
$$V_f = D\alpha_f \qquad (6)$$
6. Traverse the matrix $V_f$, reshape each column into a $\sqrt{n} \times \sqrt{n}$ block, put the blocks back at the positions from which they were extracted, and average, i.e. accumulate the subband coefficients at each position and divide by the number of accumulations, obtaining the fused low-frequency subband coefficients of the source images.
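Steps 2 to 5 above can be sketched as follows for K = 2 sources. This is a sketch under stated assumptions: scikit-learn's orthogonal_mp solves (3) with a fixed number of nonzeros per column instead of the error bound ε, the activity level (5) is computed per patch column, and random data stand in for the patch matrices and the learned dictionary.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def fuse_lowpass(V_list, D, n_nonzero=8):
    """Joint sparse fusion of the low-frequency patch matrices, eqs. (1)-(6)."""
    K = len(V_list)
    n, m = D.shape
    # D' = [D D 0 ...; D 0 D ...; ...]: one shared block-column (alpha_C)
    # plus one block-diagonal copy of D per source (alpha_k^U).
    Dp = np.hstack([np.vstack([D] * K)] +
                   [np.kron(np.eye(K)[:, [k]], D) for k in range(K)])
    V = np.vstack(V_list)
    A = orthogonal_mp(Dp, V, n_nonzero_coefs=n_nonzero)   # eq. (3), OMP
    a_C = A[:m]
    a_U = [A[m * (k + 1):m * (k + 2)] for k in range(K)]
    # Eqs. (4)-(5): per-column activity levels and normalized weights.
    act = np.stack([np.abs(a).sum(axis=0) for a in a_U])  # (K, n_patches)
    w = act / np.maximum(act.max(axis=0, keepdims=True), 1e-12)
    a_f = a_C + sum(w[k] * a_U[k] for k in range(K))
    return D @ a_f                                        # eq. (6)

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
V1, V2 = rng.standard_normal((2, 16, 40))    # toy patch matrices
Vf = fuse_lowpass([V1, V2], D)
print(Vf.shape)                              # -> (16, 40)
```

The small 16 × 32 dictionary here keeps the example fast; the method itself uses the 64 × 256 dictionary of step (1.2).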
(2.3) Fuse the high-frequency subband coefficients in the following 2 steps:
1. Compute the directional subband information of each source image at scale $2^{-l}$ (empirically, 2 ≤ l ≤ 4, where $l_j$ denotes the number of directions at scale j):
$$V_l(n, m) = \sum_{1 \le i \le l_j} |V_{l,i}(n, m)| \qquad (7)$$
where $V_{l,i}(n, m)$ denotes the directional subband coefficient at scale $2^{-l}$, direction i, pixel position (n, m).
2. Fuse by selecting, at the same scale, the directional subbands with the maximum sum of absolute values:
$$V_{l,i}^F(n, m) = V_{l,i}^{k^*}(n, m), \qquad k^* = \arg\max_{1 \le k \le K} V_l^k(n, m) \qquad (8)$$
where 1 ≤ l ≤ J and $1 \le i \le l_j$; $V_{l,i}^F(n, m)$ and $V_{l,i}^{k^*}(n, m)$ denote the directional subband coefficients of the fused image and of the $k^*$-th source image, respectively, at scale $2^{-l}$, direction i, pixel position (n, m); $k^*$ is the index of the source image whose directional subband information at scale l is the largest; and $V_l^k(n, m)$ is the subband information of the k-th source image at scale l, obtained from the definition in (7).
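Equations (7) and (8) can be sketched as follows for a single scale. The (K, n_dir, H, W) array layout for the directional subband coefficients is an assumption for illustration, with random data in place of the actual NSCT output.

```python
import numpy as np

def fuse_highpass(scale_bands):
    """Max-abs-sum fusion of one scale's directional subbands, eqs. (7)-(8).
    scale_bands: array (K, n_dir, H, W) holding the K sources' directional
    subband coefficients at one NSCT scale (assumed layout)."""
    energy = np.abs(scale_bands).sum(axis=1)          # eq. (7): (K, H, W)
    winner = energy.argmax(axis=0)                    # eq. (8): k* per pixel
    n_dir = scale_bands.shape[1]
    # Copy every directional coefficient from the winning source per pixel.
    idx = np.broadcast_to(winner, (1, n_dir) + winner.shape)
    return np.take_along_axis(scale_bands, idx, axis=0)[0]   # (n_dir, H, W)

rng = np.random.default_rng(1)
bands = rng.standard_normal((2, 4, 16, 16))           # K=2 sources, 4 directions
F = fuse_highpass(bands)
print(F.shape)                                        # -> (4, 16, 16)
```

Note that the selection is made once per pixel per scale and then applied to all directions at that scale, which is what transfers a source's salient edge jointly across its directional subbands.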
(2.4) Apply the inverse NSCT to the fused low-frequency subband coefficients and the fused high-frequency subband coefficients to obtain the final fused image F.
The beneficial effects of the invention are as follows:
The invention effectively improves the sparsity of the low-frequency subband of the image after the NSCT: by learning a dictionary in the NSCT domain and using sparse representation to solve for the sparse representation coefficients of the low-frequency subband, the sparsity of the low-frequency subband is improved significantly, which makes it easier to extract the essential features of the image and thus improves the fusion effect.
The invention also has advantages over single methods based on either NSCT or sparse representation alone: compared with a pure NSCT fusion method, its sparsity is higher and image features are extracted more effectively; compared with a pure sparse representation fusion method, it has multi-scale, multi-directional analysis capability and better matches the way the human eye observes images. The method of the invention therefore achieves a better fusion effect than both classes of methods.
Accompanying drawing explanation
Fig. 1 illustrates the extraction of common and unique features, where (a) is the infrared source image, (b) is the visible source image, (c) is the infrared image low-frequency subband, (d) is the visible image low-frequency subband, (e) shows the unique features of the infrared low-frequency subband, (f) shows the unique features of the visible low-frequency subband, and (g) shows the common features of the infrared and visible low-frequency subbands.
Fig. 2 shows the fusion results of several methods, where (a) is the DWT method, (b) is the NSCT method, (c) is the SOMP method, (d) is the JSR method, and (e) is the method of this paper.
Fig. 3 is the flow chart of the image fusion method of the invention.
Embodiment
An image fusion method based on NSCT and sparse representation mainly comprises two parts, dictionary training and image fusion, described as follows:
1. Dictionary training part:
Assume the source images are registered: there are K source images of size M × N, denoted $I_1, \ldots, I_K$.
(1.1) Decompose each training image with a J-level NSCT, obtaining one low-frequency subband coefficient matrix and $\sum_{j=1}^{J} 2^{l_j}$ high-frequency directional subband coefficient matrices, where $l_j$ is the number of directional decomposition levels at scale j. The training images may be the source images themselves, or images acquired in the same way as the source images;
(1.2) Initialize the dictionary $D \in R^{n \times m}$, where n is the size of a dictionary atom and m is the number of atoms per sub-dictionary;
(1.3) Slide a window of size $\sqrt{n} \times \sqrt{n}$ over the low-frequency subband coefficients with step 1, extracting blocks from the top left to the bottom right; stretch each block into a column vector and arrange the columns in order into a matrix;
(1.4) Train the dictionary D on this matrix with the K-SVD algorithm, and save the dictionary;
2. Image fusion part:
(2.1) Decompose the source images with the NSCT as in step (1.1);
(2.2) Fuse the low-frequency subband coefficients of the source images in the following 6 steps:
1. Arrange the low-frequency subbands of the source images into matrices $V_k$, k = 1, …, K, as in step (1.3);
2. Express the matrices $V_k$ of all source images jointly as:
$$\begin{bmatrix} V_1 \\ \vdots \\ V_K \end{bmatrix} = \begin{bmatrix} D & D & 0 & \cdots & 0 \\ D & 0 & D & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ D & 0 & 0 & \cdots & D \end{bmatrix} \begin{bmatrix} \alpha_C \\ \alpha_1^U \\ \vdots \\ \alpha_K^U \end{bmatrix} \qquad (1)$$
where $\alpha_C$ denotes the common sparse representation coefficients, contained in all source images; $\alpha_k^U$ denotes the unique sparse representation coefficients, contained only in the corresponding source image; and 0 denotes an n × m all-zero matrix.
Let
$$V = \begin{bmatrix} V_1 \\ \vdots \\ V_K \end{bmatrix}, \quad \alpha = \begin{bmatrix} \alpha_C \\ \alpha_1^U \\ \vdots \\ \alpha_K^U \end{bmatrix}, \quad D' = \begin{bmatrix} D & D & 0 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ D & 0 & 0 & \cdots & D \end{bmatrix}.$$
Then equation (1) simplifies to
$$V = D'\alpha \qquad (2)$$
3. To make $\alpha$ in (2) as sparse as possible, the orthogonal matching pursuit (OMP) algorithm is used to solve:
$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|D'\alpha - V\|_2^2 \le \epsilon \qquad (3)$$
That is, $\alpha_C$ and $\alpha_k^U$, k = 1, …, K, are obtained.
4. According to the contribution of each source image to the fusion, fuse the low-frequency subband coefficients as:
$$\alpha_f = \alpha_C + \sum_{k=1}^{K} \omega_k \alpha_k^U, \qquad \omega_k = \frac{n_k}{\max_{1 \le i \le K}(n_i)} \qquad (4)$$
where $n_k$ is the activity level of the unique coefficients; it reflects the energy, i.e. the importance, of the features:
$$n_k = \|\alpha_k^U\|_1 \qquad (5)$$
5. The low-frequency subband coefficients of the fused image are then reconstructed as:
$$V_f = D\alpha_f \qquad (6)$$
6. Traverse the matrix $V_f$, reshape each column into a $\sqrt{n} \times \sqrt{n}$ block, put the blocks back at the positions from which they were extracted, and average, i.e. accumulate the subband coefficients at each position and divide by the number of accumulations, obtaining the fused low-frequency subband coefficients of the source images.
(2.3) Because the NSCT basis functions have rich directions and shapes, salient image features such as edges, linear features, and region boundaries are captured in the high-frequency directional subbands. These salient features show large magnitudes in all directional subbands at the same scale, while the coefficients of non-salient features are almost zero. The high-frequency subband coefficients are therefore fused in the following 2 steps:
1. Compute the directional subband information of each source image at scale $2^{-l}$:
$$V_l(n, m) = \sum_{1 \le i \le l_j} |V_{l,i}(n, m)| \qquad (7)$$
where $V_{l,i}(n, m)$ denotes the directional subband coefficient at scale $2^{-l}$, direction i, pixel position (n, m).
2. Fuse by selecting, at the same scale, the directional subbands with the maximum sum of absolute values:
$$V_{l,i}^F(n, m) = V_{l,i}^{k^*}(n, m), \qquad k^* = \arg\max_{1 \le k \le K} V_l^k(n, m) \qquad (8)$$
where 1 ≤ l ≤ J and $1 \le i \le l_j$.
(2.4) Apply the inverse NSCT to the fused low-frequency subband coefficients and the fused high-frequency subband coefficients to obtain the final fused image F.
The invention is further described below with reference to the drawings and examples.
Example 1. Using sparse representation to improve the NSCT low-frequency subband sparsity and extract the unique and common features of the source images
The dictionary learning steps in Example 1 are as follows:
(1.1) Decompose the infrared and visible source images with the NSCT, using the "9-7" pyramid filter bank and the "c-d" directional filter bank; the numbers of directions in the high-frequency layers are $2^4$, $2^3$, $2^2$, $2^2$ in turn;
(1.2) Initialize the dictionary $D \in R^{64 \times 256}$;
(1.3) Slide an 8 × 8 window over the low-frequency subband coefficients with step 1, extracting blocks from the top left to the bottom right; stretch each block into a column vector and arrange the columns in order into a matrix; the infrared low-frequency subband matrix is denoted $V_1$ and the visible low-frequency subband matrix is denoted $V_2$;
(1.4) Train the dictionary D on these matrices with the K-SVD algorithm, with the allowed error ε set to the common value 0.01, and save the dictionary;
The steps in Example 1 for sparsely representing the low-frequency subbands and extracting the unique and common features of the source images are as follows:
(2.1) Express the low-frequency subband matrices of the infrared and visible source images as:
$$\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} D & D & 0 \\ D & 0 & D \end{bmatrix} \begin{bmatrix} \alpha_C \\ \alpha_1^U \\ \alpha_2^U \end{bmatrix} \qquad (9)$$
where $\alpha_C$ denotes the common sparse representation coefficients, contained in both source images; $\alpha_1^U$ and $\alpha_2^U$ denote the unique sparse representation coefficients, contained only in the infrared and visible source images respectively; and 0 denotes a 64 × 256 all-zero matrix.
Let
$$V = \begin{bmatrix} V_1 \\ V_2 \end{bmatrix}, \quad \alpha = \begin{bmatrix} \alpha_C \\ \alpha_1^U \\ \alpha_2^U \end{bmatrix}, \quad D' = \begin{bmatrix} D & D & 0 \\ D & 0 & D \end{bmatrix}.$$
Then equation (9) simplifies to
$$V = D'\alpha \qquad (10)$$
(2.2) Solve the following with the OMP algorithm:
$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|D'\alpha - V\|_2^2 \le \epsilon \qquad (11)$$
That is, $\alpha_C$, $\alpha_1^U$ and $\alpha_2^U$ are obtained.
(2.3) Compute the extracted common feature matrix $VI_C$ of the source images, the unique feature matrix $VI_1^U$ of the infrared image, and the unique feature matrix $VI_2^U$ of the visible image:
$$VI_C = D\alpha_C, \quad VI_1^U = D\alpha_1^U, \quad VI_2^U = D\alpha_2^U \qquad (12)$$
(2.4) Traverse the matrices $VI_C$, $VI_1^U$ and $VI_2^U$: reshape each column into an 8 × 8 block, put the blocks back at the positions from which they were extracted, and average, i.e. accumulate the subband coefficients at each position and divide by the number of accumulations, obtaining the common low-frequency subband coefficient image $I_C$ of the infrared and visible images and the unique low-frequency subband coefficient images $I_1^U$ and $I_2^U$ of the infrared and visible images respectively.
Figs. 1(e)-(g) show the images reconstructed after extracting the unique and common features of the source images' low-frequency subband coefficients. The darker a pixel, the closer its value is to 0; more zero values mean a sparser representation. As can be seen, the sparsity of the original infrared and visible low-frequency subbands is poor, and the essential features of the source images are not extracted. With the present method the sparsity improves significantly: features unique to the infrared low-frequency subband, such as the person and the tree contours, features unique to the visible low-frequency subband, such as the roof and the road, and features common to both, such as the building walls and the road, are all extracted, which benefits subsequent processing such as fusion.
Example 2. An image fusion example of the invention
The proposed method is compared with the traditional DWT-based image fusion method, the NSCT-based image fusion method with currently superior performance, and the sparse-representation-based image fusion methods SOMP and JSR. The first two are transform-domain methods; the latter two are fusion methods based on sparse representation in the image domain. The experiments use registered infrared and visible images of size 240 × 320. The DWT uses a 3-level db4 wavelet decomposition; the NSCT parameters are the same as in the literature, i.e. the "9-7" pyramid filter bank and the "c-d" directional filter bank, with $2^4$, $2^3$, $2^2$, $2^2$ directions in the high-frequency layers in turn. The dictionary size for sparse representation is 64 × 256, and ε = 0.01.
The implementation steps of the NSCT and sparse representation based image fusion method in this example are as follows:
The dictionary learning steps are identical to those in Example 1; the image fusion steps are:
(1) Decompose the infrared and visible source images with the NSCT, with the same transform parameters as in the learning part, i.e. the "9-7" pyramid filter bank and the "c-d" directional filter bank, with $2^4$, $2^3$, $2^2$, $2^2$ directions in the high-frequency layers in turn;
(2) Fuse the low-frequency subband coefficients of the source images:
1. Obtain the common coefficients $\alpha_C$ of the infrared and visible sources, the unique coefficients $\alpha_1^U$ of the infrared image, and the unique coefficients $\alpha_2^U$ of the visible image, as in steps (2.1) and (2.2) of Example 1;
2. Fuse the low-frequency subband coefficients as:
$$\alpha_f = \alpha_C + \sum_{k=1}^{2} \omega_k \alpha_k^U, \qquad \omega_k = \frac{n_k}{\max_{1 \le i \le 2}(n_i)} \qquad (13)$$
where $n_k$ is the activity level of the unique coefficients; it reflects the energy, i.e. the importance, of the features:
$$n_k = \|\alpha_k^U\|_1 \qquad (14)$$
3. The low-frequency subband coefficients of the fused image are then reconstructed as:
$$V_f = D\alpha_f \qquad (15)$$
4. Traverse the matrix $V_f$, reshape each column into an 8 × 8 block, put the blocks back at the positions from which they were extracted, and average, i.e. accumulate the subband coefficients at each position and divide by the number of accumulations, obtaining the fused low-frequency subband coefficients of the source images.
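The final reassembly step can be sketched as follows. As a sanity check, extracting overlapping 8 × 8 patches from an image and reassembling them with overlap averaging should reproduce the image exactly; the column-per-patch layout is the convention assumed in the earlier sketches.

```python
import numpy as np

def reassemble(Vf, shape, patch=8, step=1):
    """Put each column of V_f back as a patch x patch block at the position
    it was extracted from, accumulating overlaps and dividing by the count."""
    H, W = shape
    out = np.zeros(shape)
    cnt = np.zeros(shape)
    col = 0
    for i in range(0, H - patch + 1, step):
        for j in range(0, W - patch + 1, step):
            out[i:i + patch, j:j + patch] += Vf[:, col].reshape(patch, patch)
            cnt[i:i + patch, j:j + patch] += 1
            col += 1
    return out / cnt

# Round trip: extract patches (same top-left-to-bottom-right order), reassemble.
rng = np.random.default_rng(2)
img = rng.standard_normal((16, 16))
cols = np.stack([img[i:i + 8, j:j + 8].reshape(-1)
                 for i in range(9) for j in range(9)], axis=1)
rec = reassemble(cols, img.shape)
print(np.allclose(rec, img))        # -> True
```

After fusion the columns of $V_f$ differ between overlapping patches, and the division by the accumulation count performs exactly the averaging the step describes.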
(3) Fuse the high-frequency subband coefficients:
1. Compute the directional subband information of each source image at scale $2^{-l}$:
$$V_l(n, m) = \sum_{1 \le i \le l_j} |V_{l,i}(n, m)| \qquad (16)$$
where 1 ≤ l ≤ 4 and $1 \le i \le l_j$ ($l_1 = 2^4$, $l_2 = 2^3$, $l_3 = 2^2$, $l_4 = 2^2$).
2. Fuse by selecting, at the same scale, the directional subbands with the maximum sum of absolute values:
$$V_{l,i}^F(n, m) = V_{l,i}^{k^*}(n, m), \qquad k^* = \arg\max_{1 \le k \le 2} V_l^k(n, m) \qquad (17)$$
(4) Apply the inverse NSCT to the fused low-frequency subband coefficients and the fused high-frequency subband coefficients to obtain the final fused image F.
The fusion results are shown in Fig. 2. As can be seen, the house in the DWT result has reduced contrast with the surrounding scenery; in the NSCT result the trees and road have strong layering but the texture is still unclear, and the person and house are not prominent enough; in the SOMP and JSR results the person is fairly prominent, but regions with obvious texture such as leaves are fused too smoothly. The present method fuses ground objects such as railings, houses, leaves, and roads more clearly, the person is also fairly distinct, the continuity is good, and the visual effect is the best.
To quantitatively evaluate the performance of the different fusion methods for infrared and visible image fusion, the indices rooted cross entropy (RCE), $Q_w$, $Q_e$, and $Q^{AB/F}$ are used for comparison. RCE evaluates the overall difference between the fused image and the source images, the smaller the better; $Q_w$ is a window-weighted fusion quality measure between the source images and the fused image; $Q_e$ and $Q^{AB/F}$ reflect, locally and globally respectively, how well the fused image preserves the edges of the source images. The values of $Q_w$, $Q_e$, and $Q^{AB/F}$ all lie in [0, 1]; the closer to 1, the better the fusion quality.
The performance index of several fusion method of table 1
Table 1 lists the performance indices of the several fusion methods (bold indicates the best value). From the data in Table 1 it can be seen that, compared with the methods that fuse directly in the transform domain (DWT and NSCT) and the single-scale sparse representation methods in the image domain (SOMP and JSR), the proposed method both performs multi-scale analysis of the source images and further improves the sparsity of the image representation coefficients, enhancing the detail expressiveness of the fused image and extracting more useful information from the source images for fusion, and therefore achieves a better fusion effect.

Claims (5)

1. An image fusion method based on NSCT and sparse representation, characterized by comprising the following steps:
Assume the source images are registered: there are K source images of size M × N, denoted $I_1, \ldots, I_K$;
(1.1) Decompose each training image with a J-level NSCT, obtaining one low-frequency subband coefficient matrix and $\sum_{j=1}^{J} 2^{l_j}$ high-frequency directional subband coefficient matrices, where $l_j$ is the number of directional decomposition levels at scale j;
(1.2) Initialize the dictionary $D \in R^{n \times m}$, where n is the size of a dictionary atom and m is the number of atoms per sub-dictionary;
(1.3) Slide a window of size $\sqrt{n} \times \sqrt{n}$ over the low-frequency subband coefficients with step 1, extracting blocks from the top left to the bottom right; stretch each block into a column vector and arrange the columns in order into a matrix;
(1.4) Train the dictionary D on this matrix with the K-SVD algorithm, and save the dictionary;
(2.1) Decompose the source images with the NSCT as in step (1.1);
(2.2) Fuse the low-frequency subband coefficients of the source images in the following 6 steps:
1. Arrange the low-frequency subbands of the source images into matrices $V_k$, k = 1, …, K, as in step (1.3);
2. Express the matrices $V_k$ of all source images jointly as:
$$\begin{bmatrix} V_1 \\ \vdots \\ V_K \end{bmatrix} = \begin{bmatrix} D & D & 0 & \cdots & 0 \\ D & 0 & D & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ D & 0 & 0 & \cdots & D \end{bmatrix} \begin{bmatrix} \alpha_C \\ \alpha_1^U \\ \vdots \\ \alpha_K^U \end{bmatrix} \qquad (1)$$
where $\alpha_C$ denotes the common sparse representation coefficients, contained in all source images; $\alpha_k^U$ denotes the unique sparse representation coefficients of the k-th image, contained only in the k-th source image; and 0 denotes an n × m all-zero matrix;
Let
$$V = \begin{bmatrix} V_1 \\ \vdots \\ V_K \end{bmatrix}, \quad \alpha = \begin{bmatrix} \alpha_C \\ \alpha_1^U \\ \vdots \\ \alpha_K^U \end{bmatrix}, \quad D' = \begin{bmatrix} D & D & 0 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ D & 0 & 0 & \cdots & D \end{bmatrix};$$
Then equation (1) simplifies to
$$V = D'\alpha \qquad (2)$$
3. To make $\alpha$ in (2) as sparse as possible, the orthogonal matching pursuit algorithm is used to solve:
$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|D'\alpha - V\|_2^2 \le \epsilon \qquad (3)$$
That is, $\alpha_C$ and $\alpha_k^U$ are obtained;
4. According to the contribution of each source image to the fusion, fuse the low-frequency subband coefficients as:
$$\alpha_f = \alpha_C + \sum_{k=1}^{K} \omega_k \alpha_k^U, \qquad \omega_k = \frac{n_k}{\max_{1 \le i \le K}(n_i)} \qquad (4)$$
where $n_k$ is the activity level of the unique coefficients; it reflects the energy, i.e. the importance, of the features:
$$n_k = \|\alpha_k^U\|_1 \qquad (5)$$
5. The low-frequency subband coefficients of the fused image are then reconstructed as:
$$V_f = D\alpha_f \qquad (6)$$
6. Traverse the matrix $V_f$, reshape each column into a $\sqrt{n} \times \sqrt{n}$ block, put the blocks back at the positions from which they were extracted, and average, i.e. accumulate the subband coefficients at each position and divide by the number of accumulations, obtaining the fused low-frequency subband coefficients of the source images;
(2.3) Fuse the high-frequency subband coefficients in the following 2 steps:
1. Compute the directional subband information of each source image at scale $2^{-l}$:
$$V_l(n, m) = \sum_{1 \le i \le l_j} |V_{l,i}(n, m)| \qquad (7)$$
where $V_{l,i}(n, m)$ denotes the directional subband coefficient at scale $2^{-l}$, direction i, pixel position (n, m);
2. Fuse by selecting, at the same scale, the directional subbands with the maximum sum of absolute values:
$$V_{l,i}^F(n, m) = V_{l,i}^{k^*}(n, m), \qquad k^* = \arg\max_{1 \le k \le K} V_l^k(n, m) \qquad (8)$$
where 1 ≤ l ≤ J and $1 \le i \le l_j$; $V_{l,i}^F(n, m)$ and $V_{l,i}^{k^*}(n, m)$ denote the directional subband coefficients of the fused image and of the $k^*$-th source image, respectively, at scale $2^{-l}$, direction i, pixel position (n, m); $k^*$ is the index of the source image whose directional subband information at scale l is the largest; $V_l^k(n, m)$ is the subband information of the k-th source image at scale l;
(2.4) Apply the inverse NSCT to the fused low-frequency subband coefficients and the fused high-frequency subband coefficients to obtain the final fused image F.
2. The image fusion method based on NSCT and sparse representation according to claim 1, characterized in that: said J is taken as 3 to 5.
3. The image fusion method based on NSCT and sparse representation according to claim 1, characterized in that: said training images are the source images themselves or images acquired in the same way as the source images.
4. The image fusion method based on NSCT and sparse representation according to claim 1, characterized in that: said n = 64 and m = 256.
5. The image fusion method based on NSCT and sparse representation according to claim 1, characterized in that: said l satisfies 2 ≤ l ≤ 4.
CN201210529022.8A 2012-12-11 2012-12-11 Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation Expired - Fee Related CN102968781B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210529022.8A CN102968781B (en) 2012-12-11 2012-12-11 Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210529022.8A CN102968781B (en) 2012-12-11 2012-12-11 Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation

Publications (2)

Publication Number Publication Date
CN102968781A CN102968781A (en) 2013-03-13
CN102968781B true CN102968781B (en) 2015-01-28

Family

ID=47798907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210529022.8A Expired - Fee Related CN102968781B (en) 2012-12-11 2012-12-11 Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation

Country Status (1)

Country Link
CN (1) CN102968781B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091343B (en) * 2014-07-22 2017-01-18 西北工业大学 image quality evaluation method based on sparse structure
CN104200451B (en) * 2014-08-28 2017-05-10 西北工业大学 Image fusion method based on non-local sparse K-SVD algorithm
CN105894476B (en) * 2016-04-21 2018-07-27 重庆大学 SAR image noise reduction process method based on dictionary learning fusion
CN106056564B (en) * 2016-05-27 2018-10-16 西华大学 Edge clear image interfusion method based on joint sparse model
CN106709896A (en) * 2016-12-08 2017-05-24 江苏大学 Medical image fusion method
CN106683066A (en) * 2017-01-13 2017-05-17 西华大学 Image fusion method based on joint sparse model
CN106886977B (en) * 2017-02-08 2021-02-05 徐州工程学院 Multi-image automatic registration and fusion splicing method
CN106981058A (en) * 2017-03-29 2017-07-25 武汉大学 A kind of optics based on sparse dictionary and infrared image fusion method and system
CN108898568B (en) * 2018-04-25 2021-08-31 西北大学 Image synthesis method and device
CN108765249B (en) * 2018-05-18 2022-01-07 西安理工大学 Color watermarking method based on combination of directional anisotropy and singular value decomposition

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303764A (en) * 2008-05-16 2008-11-12 Xidian University Method for adaptive fusion of multi-sensor images based on nonsubsampled contourlet transform
CN102521818A (en) * 2011-12-05 2012-06-27 Northwestern Polytechnical University Fusion method of SAR (Synthetic Aperture Radar) images and visible light images based on NSCT (Non Subsampled Contourlet Transform)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303764A (en) * 2008-05-16 2008-11-12 Xidian University Method for adaptive fusion of multi-sensor images based on nonsubsampled contourlet transform
CN102521818A (en) * 2011-12-05 2012-06-27 Northwestern Polytechnical University Fusion method of SAR (Synthetic Aperture Radar) images and visible light images based on NSCT (Non Subsampled Contourlet Transform)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Songfeng Yin et al., "Infrared and Visible Image Fusion based on NSCT and Fuzzy Logic", Proceedings of the 2010 IEEE International Conference on Mechatronics and Automation, 2010-08-07, pp. 671-675 *
Xing Suxia et al., "Image Fusion and Robustness Analysis Based on NSCT Transform", Infrared Technology, Vol. 33, No. 1, January 2011, pp. 45-55 *

Also Published As

Publication number Publication date
CN102968781A (en) 2013-03-13

Similar Documents

Publication Publication Date Title
CN102968781B (en) Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation
CN107341786A (en) The infrared and visible light image fusion method that wavelet transformation represents with joint sparse
CN107657217A (en) The fusion method of infrared and visible light video based on moving object detection
CN101344913B (en) Method for carrying out identity recognition by extracting iris texture features
CN101493935B (en) Synthetic aperture radar image segmentation method based on shear wave hidden Markov model
Wang A multi-scale approach for delineating individual tree crowns with very high resolution imagery
CN104091151A (en) Vehicle identification method based on Gabor feature extraction and sparse representation
CN105335975B (en) Polarization SAR image segmentation method based on low-rank decomposition and statistics with histogram
CN106981058A (en) A kind of optics based on sparse dictionary and infrared image fusion method and system
CN104700379B (en) A kind of remote sensing image fusion method based on Multiscale Morphological constituent analysis
CN104021537A (en) Infrared and visible image fusion method based on sparse representation
CN103839243B (en) Multi-channel satellite cloud picture fusion method based on Shearlet conversion
CN102063713A (en) Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
CN103700089B (en) Extracting and sorting method of multi-scale isomeric features of three-dimensional medical image
CN103700075A (en) Tetrolet transform-based multichannel satellite cloud picture fusing method
CN102063715A (en) Method for fusing typhoon cloud pictures based on NSCT (Nonsubsampled Controurlet Transformation) and particle swarm optimization algorithm
CN102682306B (en) Wavelet pyramid polarization texture primitive feature extracting method for synthetic aperture radar (SAR) images
CN104008537A (en) Novel noise image fusion method based on CS-CT-CHMM
CN110570387B (en) Image fusion method based on feature level Copula model similarity
CN109919242A (en) A kind of images steganalysis method based on depth characteristic and joint sparse
CN104850859A (en) Multi-scale analysis based image feature bag constructing method
CN102855616A (en) Image fusion method based on multi-scale dictionary learning
CN102176014A (en) Method for detecting urban region change based on multi-temporal SAR (synthetic aperture radar) images
CN104102900A (en) Vehicle identification system
CN104680182B (en) Polarization SAR sorting technique based on NSCT and differentiation dictionary learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150128

Termination date: 20151211

EXPY Termination of patent right or utility model