CN104463822A - Multi-focus image fusion method and device based on multi-scale global filtering - Google Patents

Multi-focus image fusion method and device based on multi-scale global filtering

Info

Publication number
CN104463822A
Authority
CN
China
Prior art keywords
image
multi-scale
global filtering
sub-band
Prior art date
Legal status
Granted
Application number
CN201410763858.3A
Other languages
Chinese (zh)
Other versions
CN104463822B (en)
Inventor
尹伟科
延翔
秦翰林
周慧鑫
李佳
宗靖国
马琳
韩姣姣
曾庆杰
吕恩龙
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201410763858.3A
Publication of CN104463822A
Application granted
Publication of CN104463822B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a multi-focus image fusion method based on multi-scale global filtering. Several multi-focus images to be fused are each decomposed at multiple scales by multi-scale global filtering, the resulting multi-scale sub-band images are fused according to a fusion rule, and inverse multi-scale global filtering is applied to the fused multi-scale sub-band images to obtain the final fused image. The invention further discloses a multi-focus image fusion device based on multi-scale global filtering. The sharpness, information content and other qualities of the fused image are improved, so a fused image of higher quality can be obtained.

Description

Multi-focus image fusion method based on multi-scale global filtering, and device thereof
Technical field
The invention belongs to the technical field of image fusion processing, and specifically relates to a multi-focus image fusion method based on multi-scale global filtering and a device thereof.
Background art
Image fusion is of great significance in image analysis and computer vision. Image fusion technology organically combines images of the same scene obtained from different imaging sensors into a single image; it can effectively exploit the complementary advantages of the images obtained by different imaging sensors and form an image that truly and clearly reflects the objective scene, for further image analysis and understanding and for the detection and recognition of targets. Since the 1980s, multi-sensor image fusion has attracted wide interest and a surge of research. Among its branches, multi-focus image fusion is an important research direction with broad applications in machine learning, remote sensing, computer vision, medical image processing and military applications. After nearly 30 years of development, image fusion technology has reached a certain scale, and many application systems have been developed for different fields at home and abroad, but this does not mean that image fusion technology is fully mature. Judging from the current research situation at home and abroad, image fusion technology still has problems to be solved in both theory and technique. Compared with work abroad, domestic research on image fusion started late; although considerable results have been achieved domestically in recent years, it still lags behind the state of the art abroad. Therefore, extensive and in-depth research on the basic theory and basic techniques of image fusion is urgently needed.
With the rapid development of information technology, the demand for information in practical applications keeps increasing. Under these conditions, typical image fusion methods based on multiresolution analysis fall short. The method of "Image sequence fusion using a shift-invariant wavelet transform", Image Processing, 1997. Proceedings., International Conference on. IEEE, 1997, 3:288-291, suffers because the wavelet transform cannot capture the edges and texture of an image well; moreover, it simply takes the coefficient with the larger absolute value as the fusion rule, and the resulting fused image is unsatisfactory. The method of "Feature level fusion of multimodal medical images in lifting wavelet transform domain", Engineering in Medicine and Biology Society, 2004. IEMBS'04. 26th Annual International Conference of the IEEE, 2004, 1:1479-1482, computes the gradients of the wavelet coefficients and determines the fusion coefficients by comparing the difference between the wavelet-coefficient gradients of the two images with a preset threshold; although the fused image obtained by this method is improved, the fusion effect is still not ideal. In recent years, some scholars have also proposed fusing images with image-filtering theory. The method of "The multiscale directional bilateral filter and its application to multisensor image fusion", Information Fusion, 2012, 13(3):196-206, adds directionality to multi-scale bilateral filtering to extract image information effectively, and obtains a good fusion result with higher spatial resolution and contrast. The method of "Image fusion based on pixel significance using cross bilateral filter", Signal, Image and Video Processing, 2013:1-12, exchanges the filtering kernels of the bilateral filter, obtains fusion weights from the statistical properties of a neighborhood window, and obtains the fused image by weighted averaging; its information content, contrast and spatial resolution all increase. However, the detail information and sharpness still fall short of practical requirements, and the overall quality of the fused image remains unsatisfactory.
Summary of the invention
To solve the above technical problems, embodiments of the present invention provide a multi-focus image fusion method based on multi-scale global filtering and a device thereof.
To achieve the above object, the technical solution of the embodiments of the present invention is realized as follows:
An embodiment of the present invention provides a multi-focus image fusion method based on multi-scale global filtering. The fusion method is: perform multi-scale decomposition on several multi-focus images to be fused according to multi-scale global filtering; fuse the multi-scale sub-band images of the decomposed multi-focus images according to a fusion rule; and apply inverse multi-scale global filtering to the fused multi-scale sub-band images to obtain the final fused image.
In the above scheme, performing multi-scale decomposition on the several multi-focus images to be fused according to multi-scale global filtering is: perform multi-scale global filtering decomposition on the two input multi-focus images to obtain the low-frequency sub-band images $L_K^A$, $L_K^B$ and the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$ of the two multi-focus images, where K = 5 is the number of decomposition levels, m denotes the m-th decomposition level, A denotes the first multi-focus image, and B denotes the second multi-focus image.
In the above scheme, fusing the multi-scale sub-band images of the decomposed multi-focus images according to a fusion rule is: fuse the low-frequency sub-band coefficients $L_K^A$, $L_K^B$ of the two multi-focus images decomposed by multi-scale global filtering, and fuse the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$.
An embodiment of the present invention also provides a multi-focus image fusion device based on multi-scale global filtering. The device comprises a decomposition unit, a fusion unit and an inverse multi-scale global filtering unit, wherein:
the decomposition unit is configured to perform multi-scale decomposition on several multi-focus images to be fused according to multi-scale global filtering;
the fusion unit is configured to fuse the multi-scale sub-band images of the decomposed multi-focus images according to a fusion rule;
the inverse multi-scale global filtering unit is configured to apply inverse multi-scale global filtering to the fused multi-scale sub-band images to obtain the final fused image.
In the above scheme, the decomposition unit is specifically configured to perform multi-scale global filtering decomposition on the two input multi-focus images to obtain the low-frequency sub-band images $L_K^A$, $L_K^B$ and the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$ of the two multi-focus images, where K = 5 is the number of decomposition levels, m denotes the m-th decomposition level, A denotes the first multi-focus image, and B denotes the second multi-focus image.
In the above scheme, the fusion unit is specifically configured to fuse the low-frequency sub-band coefficients $L_K^A$, $L_K^B$ of the two multi-focus images decomposed by multi-scale global filtering, and to fuse the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$.
Compared with the prior art, the present invention performs multi-scale decomposition on several multi-focus images to be fused according to multi-scale global filtering, fuses the multi-scale sub-band images of the decomposed multi-focus images according to a fusion rule, and applies inverse multi-scale global filtering to the fused multi-scale sub-band images to obtain the final fused image. By filtering the multi-focus images repeatedly, the present invention obtains a low-frequency sub-band image and high-frequency sub-band images at different scales, so the spatial consistency of the image is fully taken into account during decomposition. The top-hat transform and higher-order statistics are used to obtain the fusion weights of the low-frequency sub-band images and the high-frequency sub-band images respectively, which improves the sharpness, information content and other qualities of the fused image and yields a fused image of better quality. Making the global filtering multi-scale allows the important information of the image to be captured better.
Brief description of the drawings
Fig. 1 is the overall flowchart of the present invention;
Fig. 2 shows the source images of the two groups of multi-focus images used by the present invention;
Fig. 3 shows the results of fusing the multi-focus bottle images with the present invention and four existing image fusion methods;
Fig. 4 shows the results of fusing the multi-focus pepsi images with the present invention and four existing image fusion methods;
Fig. 5 is the connection block diagram of the device of the present invention.
Detailed description of the embodiments
The present invention is described in detail below in conjunction with the drawings and specific embodiments.
The invention provides a multi-focus image fusion method based on multi-scale global filtering: perform multi-scale decomposition on several multi-focus images to be fused according to multi-scale global filtering, fuse the multi-scale sub-band images of the decomposed multi-focus images according to a fusion rule, and apply inverse multi-scale global filtering to the fused multi-scale sub-band images to obtain the final fused image.
An embodiment of the present invention provides a multi-focus image fusion method based on multi-scale global filtering; as shown in Fig. 1, the method is realized by the following steps:
Step 101: perform multi-scale decomposition on several multi-focus images to be fused according to multi-scale global filtering.
Specifically, multi-scale global filtering decomposition is performed on the two input multi-focus images to obtain the low-frequency sub-band images $L_K^A$, $L_K^B$ and the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$ of the two multi-focus images, where K = 5 is the number of decomposition levels, m denotes the m-th decomposition level, A denotes the first multi-focus image, and B denotes the second multi-focus image.
Performing multi-scale global filtering decomposition on the two input multi-focus images to obtain the low-frequency sub-band images $L_K^A$, $L_K^B$ and the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$ of the two multi-focus images is specifically:
For images A and B, the filtered image sequences are obtained by formulas (1) and (2):

$A_{m+1} = W_m^A A_m$   (1)

$B_{m+1} = W_m^B B_m$   (2)

where, for m = 1, $A_1$ and $B_1$ denote the original images A and B respectively, and the global filters are

$W_m^A = \begin{bmatrix} w_{m,1}^T \\ w_{m,2}^T \\ \vdots \\ w_{m,n}^T \end{bmatrix}$, $\quad W_m^B = \begin{bmatrix} \lambda_{m,1}^T \\ \lambda_{m,2}^T \\ \vdots \\ \lambda_{m,n}^T \end{bmatrix}$

$w_i = \dfrac{1}{\sum_{j=1}^{n} Y_{ij}} [Y_{i1}, Y_{i2}, \ldots, Y_{in}]^T$   (3)

$\lambda_i = \dfrac{1}{\sum_{j=1}^{n} Z_{ij}} [Z_{i1}, Z_{i2}, \ldots, Z_{in}]^T$   (4)

where $[Y_{i1}, Y_{i2}, \ldots, Y_{in}]$ and $[Z_{i1}, Z_{i2}, \ldots, Z_{in}]$ denote the $i$-th rows of the symmetric Gaussian kernel matrices $Y$ and $Z$ respectively, and $n$ is the number of pixels of the image to be filtered.

The low-frequency sub-band coefficients of images A and B are obtained directly from formulas (5) and (6):

$L_K^A = A_{K+1}$   (5)

$L_K^B = B_{K+1}$   (6)

The high-frequency sub-band coefficients of images A and B are obtained as the differences between approximation images at adjacent scales, which give the high-frequency detail images:

$H_m^A = A_{m+1} - A_m$   (7)

$H_m^B = B_{m+1} - B_m$   (8)

where $m = 1, 2, \ldots, K$.
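For illustration, a minimal NumPy sketch of the decomposition in formulas (1)-(8) follows. The function names (gaussian_kernel_matrix, msgf_decompose), the Gaussian parameter sigma, and the construction of the symmetric Gaussian kernel matrix from pixel coordinates are assumptions of the sketch: the text above only states that Y and Z are symmetric Gaussian kernel matrices, without specifying how they are built. The full n × n filter matrix is only practical for small images, and the same filter is reused at every level here, whereas the text indexes the filter by level m and by image.

```python
import numpy as np

def gaussian_kernel_matrix(h, w, sigma=2.0):
    """Placeholder for the symmetric Gaussian kernel matrix Y (or Z).

    Assumed here: a spatial Gaussian over pixel coordinates, giving an
    n x n symmetric matrix with n = h * w pixels.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float64)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def msgf_decompose(img, K=5, sigma=2.0):
    """Multi-scale global filtering decomposition following formulas (1)-(8).

    Returns the low-frequency sub-band L_K (formula (5)) and the K high-frequency
    sub-bands H_m = A_{m+1} - A_m (formulas (7)-(8), sign convention as stated).
    """
    h, w = img.shape
    Y = gaussian_kernel_matrix(h, w, sigma)
    W = Y / Y.sum(axis=1, keepdims=True)   # formula (3): each row w_i normalized by its sum
    A = [img.astype(np.float64).ravel()]   # A_1 is the original image, flattened
    for _ in range(K):
        A.append(W @ A[-1])                # formula (1): A_{m+1} = W_m A_m
    low = A[K].reshape(h, w)               # formula (5): L_K = A_{K+1}
    highs = [(A[m + 1] - A[m]).reshape(h, w) for m in range(K)]
    return low, highs
```

A small crop (for example 32 × 32) is enough to try this sketch; for the 128 × 128 test images used later, the dense n × n matrix would need to be replaced by a sparse or separable filter.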
Step 102: fuse the multi-scale sub-band images of the decomposed multi-focus images according to the fusion rule.
Specifically, the low-frequency sub-band coefficients $L_K^A$, $L_K^B$ of the two multi-focus images decomposed by multi-scale global filtering are fused, and the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$ are fused.
Because the low-frequency sub-band coefficients are an approximation of the image, they reflect the energy distribution of the original image. The main information of the fused image therefore resides in the low-frequency sub-band coefficients, and the choice of fusion rule for the low-frequency sub-band images directly affects the quality of the final fusion result. The present invention obtains the fusion weights of the low-frequency sub-band coefficients by applying the top-hat transform to the original multi-focus images and then fuses the low-frequency sub-band coefficients, so as to improve the quality of the image fusion.
Fusing the low-frequency sub-band coefficients $L_K^A$, $L_K^B$ of the two multi-focus images decomposed by multi-scale global filtering is specifically:
Step 201: Apply the top-hat transform to the original multi-focus images A and B respectively to obtain the transformed images A* and B*. The grayscale dilation and erosion of image f (f denotes A or B) by structuring element U are expressed as:

$[f \oplus U](p, q) = \max\{f(p-a, q-b) + U(a, b)\}$   (9)

$[f \ominus U](p, q) = \min\{f(p+a, q+b) - U(a, b)\}$   (10)

where (p, q) are the coordinates of an image pixel, (a, b) are the coordinates of a pixel within the structuring element, $\oplus$ and $\ominus$ denote the dilation and erosion operators of mathematical morphology respectively, and the chosen structuring element is of size 3 × 3.

After dilation and erosion are defined, the opening and closing of image f (f denotes A or B) by structuring element U can be performed, expressed as follows:

$f \circ U = (f \ominus U) \oplus U$   (11)

$f \bullet U = (f \oplus U) \ominus U$   (12)

where $\circ$ and $\bullet$ denote the opening and closing operators of mathematical morphology respectively.
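As a quick illustration of the operators in formulas (9)-(12), the sketch below uses the grayscale morphology routines of scipy.ndimage with a flat 3 × 3 structuring element. The function name morphology_demo and the use of the white/black top-hat as the basis of the later fusion weights are assumptions of the sketch, not something specified in the text.

```python
import numpy as np
from scipy import ndimage

SE = (3, 3)  # flat 3 x 3 structuring element U, as chosen in the text

def morphology_demo(f):
    """Grayscale morphology of image f, mapped to formulas (9)-(12)."""
    f = np.asarray(f, dtype=np.float64)
    dil = ndimage.grey_dilation(f, size=SE)    # formula (9):  dilation, f (+) U
    ero = ndimage.grey_erosion(f, size=SE)     # formula (10): erosion,  f (-) U
    opened = ndimage.grey_opening(f, size=SE)  # formula (11): (f (-) U) (+) U
    closed = ndimage.grey_closing(f, size=SE)  # formula (12): (f (+) U) (-) U
    white_tophat = f - opened                  # bright details (assumed weight basis)
    black_tophat = closed - f                  # dark details
    return dil, ero, opened, closed, white_tophat, black_tophat
```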
Step 202: Obtain the fusion weights of the low-frequency sub-band coefficients. On the basis of the opening and closing operations of mathematical morphology defined above, applying the top-hat transform with structuring element U to images A and B yields the fusion weights $\omega_K^A$ and $\omega_K^B$ of the low-frequency sub-band coefficients $L_K^A$ and $L_K^B$.
Step 203: Fuse the low-frequency sub-band coefficients $L_K^A$ and $L_K^B$:

$L_K(p,q) = \begin{cases} \dfrac{L_K^A(p,q) + L_K^B(p,q)}{2}, & \text{if } \omega_K^A(p,q) = \omega_K^B(p,q) = 0 \\ \omega_K^A(p,q)\,L_K^A(p,q) + \omega_K^B(p,q)\,L_K^B(p,q), & \text{otherwise} \end{cases}$   (15)
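The sketch below applies formula (15). How the top-hat responses of A and B are turned into the weights $\omega_K^A$ and $\omega_K^B$ is not spelled out above, so a normalized white top-hat magnitude is used as a placeholder; tophat_weights and fuse_low are illustrative names, not the patent's own.

```python
import numpy as np
from scipy import ndimage

def tophat_weights(A, B, se=(3, 3)):
    """Placeholder low-band fusion weights from the white top-hat of A and B."""
    tA = ndimage.white_tophat(A.astype(np.float64), size=se)
    tB = ndimage.white_tophat(B.astype(np.float64), size=se)
    total = tA + tB
    wA = np.divide(tA, total, out=np.zeros_like(tA), where=total > 0)
    wB = np.divide(tB, total, out=np.zeros_like(tB), where=total > 0)
    return wA, wB                                   # both zero where neither image responds

def fuse_low(LA, LB, wA, wB):
    """Low-frequency fusion per formula (15)."""
    LA = LA.astype(np.float64)
    LB = LB.astype(np.float64)
    fused = wA * LA + wB * LB                       # weighted branch of (15)
    both_zero = (wA == 0) & (wB == 0)
    fused[both_zero] = 0.5 * (LA + LB)[both_zero]   # averaging branch of (15)
    return fused
```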
Because the high-frequency sub-band coefficients of an image represent its detail information, the detail information of the fused image comes mainly from the high-frequency sub-band coefficients of the source images, and the choice of fusion rule for the high-frequency sub-band coefficients directly affects how rich the detail information of the final fused image is and how sharp the image is. The present invention uses higher-order statistics to obtain the eighth-order correlation CF of the high-frequency sub-band coefficients of the source images and thereby determines the fusion rule for the final high-frequency sub-band coefficients.
The high-frequency sub-band coefficients $\{H_m^A, m = 1, 2, \ldots, K\}$ and $\{H_m^B, m = 1, 2, \ldots, K\}$ obtained by the multi-scale global filtering decomposition are fused as follows:
Step 301: Compute the correlation CF:

$CF^{A,B} = \dfrac{\frac{1}{P \times Q}\sum_{p}^{P}\sum_{q}^{Q}\left(I_1(p,q) - \mu_{I_1}\right)^4 \left(I_2(p,q) - \mu_{I_2}\right)^4}{\left(\sum_{p}^{P}\sum_{q}^{Q}\left(I_1(p,q) - \mu_{I_1}\right)^8\right)\left(\sum_{p}^{P}\sum_{q}^{Q}\left(I_2(p,q) - \mu_{I_2}\right)^8\right)}$   (16)

where $I_1$ and $I_2$ denote the gray values of $H_m^A$ and $H_m^B$ in a P × Q local neighborhood, $\mu_{I_1}$ and $\mu_{I_2}$ denote their respective means, (p, q) denotes the pixel position of the high-frequency sub-band coefficient, and P × Q denotes the size of the neighborhood window, which is 3 × 3 in the present invention.
Step 302: Fuse the high-frequency sub-band coefficients $\{H_m^A, m = 1, 2, \ldots, K\}$ and $\{H_m^B, m = 1, 2, \ldots, K\}$:
If the correlation CF(p, q) > Th, the high-frequency sub-band coefficients of the two images are highly correlated at pixel position (p, q), and the high-frequency sub-band coefficient of the fused image is:

$H_m(p,q) = \begin{cases} H_m^A(p,q), & \text{if } |H_m^A(p,q)| > |H_m^B(p,q)| \\ H_m^B(p,q), & \text{otherwise} \end{cases}$   (17)

If the correlation CF(p, q) < Th, the high-frequency sub-band coefficients of the two images have low correlation at pixel position (p, q); to preserve the high-frequency information of both multi-focus images, the high-frequency sub-band coefficient of the fused image is:

$H_m(p,q) = H_m^A(p,q) + H_m^B(p,q)$   (18)

where $H_m^A(p,q)$, $H_m^B(p,q)$ and $H_m(p,q)$ denote the high-frequency sub-band coefficients of the two multi-focus images A and B and of the fused image F at pixel position (p, q) respectively, and Th is a preset threshold, set to 0.9 in the present invention.
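A sketch of the high-frequency rule in formulas (16)-(18), evaluated over a sliding 3 × 3 window, is given below. Normalizing CF into [0, 1] via a square root over the denominator is an assumption of the sketch, made so that the 0.9 threshold is meaningful; cf_map and fuse_high are illustrative names.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cf_map(HA, HB, win=3, eps=1e-12):
    """Local eighth-order correlation in the spirit of formula (16).

    Assumption: a Cauchy-Schwarz style normalization (square root over the
    denominator) so that CF lies in [0, 1].
    """
    n = win * win
    dA = HA - uniform_filter(HA, win)                       # deviations from the local mean
    dB = HB - uniform_filter(HB, win)
    num = uniform_filter(dA**4 * dB**4, win) * n            # local sum of 4th-power products
    den = np.sqrt(uniform_filter(dA**8, win) * n *
                  uniform_filter(dB**8, win) * n)           # sqrt of product of 8th-power sums
    return num / (den + eps)

def fuse_high(HA, HB, th=0.9, win=3):
    """High-frequency fusion per formulas (17)-(18) with threshold Th = 0.9."""
    cf = cf_map(HA, HB, win)
    pick_larger = np.where(np.abs(HA) > np.abs(HB), HA, HB)  # formula (17)
    return np.where(cf > th, pick_larger, HA + HB)           # formula (18) otherwise
```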
Step 103: apply inverse multi-scale global filtering to the fused multi-scale sub-band images to obtain the final fused image.
Specifically, to obtain the final fused image F, the fused low-frequency sub-band coefficients $L_K(p,q)$ and high-frequency sub-band coefficients $H_m(p,q)$ obtained in step 102 must be reconstructed. The reconstruction method of the present invention applies inverse multi-scale global filtering to the fused multi-scale sub-band coefficients, that is, the fused low-frequency sub-band coefficients and high-frequency sub-band coefficients are added to give the final fused image F, as shown in formula (19):

$F(p,q) = L_K(p,q) + H_m(p,q)$   (19)
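Putting the steps together, the following hypothetical end-to-end sketch reuses the helper functions sketched above (msgf_decompose, tophat_weights, fuse_low, fuse_high, all illustrative names); summing the fused high-frequency bands over all K levels in formula (19) is likewise an assumption of the sketch.

```python
import numpy as np

def fuse_multifocus(A, B, K=5):
    """End-to-end sketch: decompose (step 101), fuse sub-bands (step 102),
    and reconstruct per formula (19)."""
    LA, HAs = msgf_decompose(A, K)                         # low/high bands of image A
    LB, HBs = msgf_decompose(B, K)                         # low/high bands of image B
    wA, wB = tophat_weights(A, B)                          # low-band weights from the top-hat
    L = fuse_low(LA, LB, wA, wB)                           # formula (15)
    Hs = [fuse_high(ha, hb) for ha, hb in zip(HAs, HBs)]   # formulas (16)-(18)
    return L + np.sum(Hs, axis=0)                          # formula (19), summed over all levels
```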
The effect of the present invention can be illustrated by simulation experiments:
1. Experimental conditions
The image data used in the experiments are two groups of registered multi-focus images of size 128 × 128. The first group is the bottle images, shown in Fig. 2(a) and Fig. 2(b); the second group is the pepsi images, shown in Fig. 2(c) and Fig. 2(d).
2. Experimental content
Experiment 1: the bottle images are fused with the method of the present invention and the four existing fusion methods; the fusion results are shown in Fig. 3. Fig. 3(a) is the result of the method of "Image sequence fusion using a shift-invariant wavelet transform", Image Processing, 1997. Proceedings., International Conference on. IEEE, 1997, 3:288-291; Fig. 3(b) is the result of the method of "Feature level fusion of multimodal medical images in lifting wavelet transform domain", Engineering in Medicine and Biology Society, 2004. IEMBS'04. 26th Annual International Conference of the IEEE, 2004, 1:1479-1482; Fig. 3(c) is the result of the method of "Image fusion based on pixel significance using cross bilateral filter", Signal, Image and Video Processing, 2013:1-12; Fig. 3(d) is the result of the method of "The multiscale directional bilateral filter and its application to multisensor image fusion", Information Fusion, 2012, 13(3):196-206; Fig. 3(e) is the image fusion result of the present invention.
As can be seen from Fig. 3, compared with the four existing fusion methods, the fusion method of the present invention gives a better visual effect: the letters on the two bottles and the gear are all clearer. Compared with the method of the present invention, the fusion results of the methods of the four documents cited above have lower sharpness and less detail information.
Experiment 2: the pepsi images are fused with the method of the present invention and the four existing fusion methods; the fusion results are shown in Fig. 4. Fig. 4(a) is the result of the method of "Image sequence fusion using a shift-invariant wavelet transform", Image Processing, 1997. Proceedings., International Conference on. IEEE, 1997, 3:288-291; Fig. 4(b) is the result of the method of "Feature level fusion of multimodal medical images in lifting wavelet transform domain", Engineering in Medicine and Biology Society, 2004. IEMBS'04. 26th Annual International Conference of the IEEE, 2004, 1:1479-1482; Fig. 4(c) is the result of the method of "Image fusion based on pixel significance using cross bilateral filter", Signal, Image and Video Processing, 2013:1-12; Fig. 4(d) is the result of the method of "The multiscale directional bilateral filter and its application to multisensor image fusion", Information Fusion, 2012, 13(3):196-206; Fig. 4(e) is the image fusion result of the present invention.
As can be seen from Fig. 4, compared with the four existing fusion methods, the fusion method of the present invention gives a better visual effect: the text in the image is clearer. Compared with the method of the present invention, the fusion results of the methods of the four documents cited above have lower sharpness and less information content.
To objectively evaluate the effect of the present invention, the fusion method of the present invention is compared with the fusion methods of the four documents cited above on four image-quality evaluation indices. The objective evaluation indices of the five fusion methods on the first group (bottle) and the second group (pepsi) of multi-focus images are given in Table 1 and Table 2:
Table 1. Objective evaluation indices for fusion of the first group of multi-focus images (bottle)
Method SD AG Entropy SF
SWT 65.2281 23.0340 7.6856 34.5349
LWT 66.2794 27.7749 7.6824 41.9111
CBF 66.6634 26.9130 7.6794 41.2268
MDBF 68.8676 29.4745 7.6255 45.3892
The present invention 71.3053 37.3358 7.5510 56.2755
Table 2. Objective evaluation indices for fusion of the second group of multi-focus images (pepsi)
Method SD AG Entropy SF
SWT 43.2108 8.4721 6.9742 13.2138
LWT 43.2108 9.4786 6.9834 15.2527
CBF 43.6270 9.4341 6.9924 15.5338
MDBF 44.6036 10.1409 7.0392 16.7478
The present invention 45.6118 13.6810 7.2870 22.3411
In Table 1 and Table 2:
SWT denotes the fusion method of Rockinger O, "Image sequence fusion using a shift-invariant wavelet transform," Image Processing, 1997. Proceedings., International Conference on. IEEE, 1997, 3:288-291.
LWT denotes the fusion method of Kor S, Tiwary U, "Feature level fusion of multimodal medical images in lifting wavelet transform domain," Engineering in Medicine and Biology Society, 2004. IEMBS'04. 26th Annual International Conference of the IEEE, 2004, 1:1479-1482.
CBF denotes the image fusion method of Kumar B K S, "Image fusion based on pixel significance using cross bilateral filter," Signal, Image and Video Processing, 2013:1-12.
MDBF denotes the fusion method of Hu J, Li S, "The multiscale directional bilateral filter and its application to multisensor image fusion," Information Fusion, 2012, 13(3):196-206.
SD denotes standard deviation, AG denotes average gradient, Entropy denotes information entropy, and SF denotes spatial frequency.
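For reference, the sketch below computes the four indices in their commonly used forms; the exact definitions behind Tables 1 and 2 are not given here, so these textbook formulas (and the 8-bit grey-level assumption for the entropy histogram) are assumptions.

```python
import numpy as np

def sd(img):
    """SD: standard deviation of the grey values."""
    return float(np.std(img))

def average_gradient(img):
    """AG: mean magnitude of the local gradient, using forward differences."""
    f = img.astype(np.float64)
    gx = np.diff(f, axis=1)[:-1, :]
    gy = np.diff(f, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2.0)))

def entropy(img, bins=256):
    """Entropy: Shannon entropy of the grey-level histogram (8-bit assumed)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(img):
    """SF: sqrt of row frequency squared plus column frequency squared."""
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))
    return float(np.sqrt(rf**2 + cf**2))
```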
As can be seen from Table 1, the method of the present invention is clearly better than the methods of the four documents cited above on three of the indices and is close to the other methods on the remaining index; overall, the indices of the method of the present invention are higher.
As can be seen from Table 2, the method of the present invention is clearly better than the methods of the four documents cited above on all four indices.
The above experiments show that the present invention achieves a good visual effect on the multi-focus image fusion problem.
An embodiment of the present invention also provides a multi-focus image fusion device based on multi-scale global filtering. As shown in Fig. 5, the device comprises a decomposition unit 1, a fusion unit 2 and an inverse multi-scale global filtering unit 3, wherein:
the decomposition unit 1 is configured to perform multi-scale decomposition on several multi-focus images to be fused according to multi-scale global filtering;
the fusion unit 2 is configured to fuse the multi-scale sub-band images of the decomposed multi-focus images according to a fusion rule;
the inverse multi-scale global filtering unit 3 is configured to apply inverse multi-scale global filtering to the fused multi-scale sub-band images to obtain the final fused image.
The decomposition unit 1 is specifically configured to perform multi-scale global filtering decomposition on the two input multi-focus images to obtain the low-frequency sub-band images $L_K^A$, $L_K^B$ and the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$ of the two multi-focus images, where K = 5 is the number of decomposition levels and m denotes the m-th decomposition level.
The fusion unit 2 is specifically configured to fuse the low-frequency sub-band coefficients $L_K^A$, $L_K^B$ of the two multi-focus images decomposed by multi-scale global filtering, and to fuse the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$.
The content of the present invention is not limited to the embodiments described; any equivalent transformation of the technical solution of the present invention adopted by a person of ordinary skill in the art after reading the description of the present invention falls within the scope of the claims of the present invention.

Claims (6)

1. A multi-focus image fusion method based on multi-scale global filtering, characterized in that the fusion method is: perform multi-scale decomposition on several multi-focus images to be fused according to multi-scale global filtering; fuse the multi-scale sub-band images of the decomposed multi-focus images according to a fusion rule; and apply inverse multi-scale global filtering to the fused multi-scale sub-band images to obtain the final fused image.
2. The multi-focus image fusion method based on multi-scale global filtering according to claim 1, characterized in that performing multi-scale decomposition on the several multi-focus images to be fused according to multi-scale global filtering is: perform multi-scale global filtering decomposition on the two input multi-focus images to obtain the low-frequency sub-band images $L_K^A$, $L_K^B$ and the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$ of the two multi-focus images, where K = 5 is the number of decomposition levels, m denotes the m-th decomposition level, A denotes the first multi-focus image, and B denotes the second multi-focus image.
3. The multi-focus image fusion method based on multi-scale global filtering according to claim 1, characterized in that fusing the multi-scale sub-band images of the decomposed multi-focus images according to a fusion rule is: fuse the low-frequency sub-band coefficients $L_K^A$, $L_K^B$ of the two multi-focus images decomposed by multi-scale global filtering, and fuse the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$.
4. A multi-focus image fusion device based on multi-scale global filtering, characterized in that the device comprises a decomposition unit, a fusion unit and an inverse multi-scale global filtering unit, wherein:
the decomposition unit is configured to perform multi-scale decomposition on several multi-focus images to be fused according to multi-scale global filtering;
the fusion unit is configured to fuse the multi-scale sub-band images of the decomposed multi-focus images according to a fusion rule;
the inverse multi-scale global filtering unit is configured to apply inverse multi-scale global filtering to the fused multi-scale sub-band images to obtain the final fused image.
5. The multi-focus image fusion device based on multi-scale global filtering according to claim 4, characterized in that the decomposition unit is specifically configured to perform multi-scale global filtering decomposition on the two input multi-focus images to obtain the low-frequency sub-band images $L_K^A$, $L_K^B$ and the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$ of the two multi-focus images, where K = 5 is the number of decomposition levels, m denotes the m-th decomposition level, A denotes the first multi-focus image, and B denotes the second multi-focus image.
6. The multi-focus image fusion device based on multi-scale global filtering according to claim 4, characterized in that the fusion unit is specifically configured to fuse the low-frequency sub-band coefficients $L_K^A$, $L_K^B$ of the two multi-focus images decomposed by multi-scale global filtering, and to fuse the high-frequency sub-band images $\{H_m^A, m = 1, 2, \ldots, K\}$, $\{H_m^B, m = 1, 2, \ldots, K\}$.
CN201410763858.3A 2014-12-11 2014-12-11 Multi-focus image fusion method and device based on multi-scale global filtering Active CN104463822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410763858.3A CN104463822B (en) 2014-12-11 2014-12-11 Multi-focus image fusion method and device based on multi-scale global filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410763858.3A CN104463822B (en) 2014-12-11 2014-12-11 Multi-focus image fusion method and device based on multi-scale global filtering

Publications (2)

Publication Number Publication Date
CN104463822A true CN104463822A (en) 2015-03-25
CN104463822B CN104463822B (en) 2017-08-25

Family

ID=52909809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410763858.3A Active CN104463822B (en) 2014-12-11 2014-12-11 Multi-focus image fusion method and device based on multi-scale global filtering

Country Status (1)

Country Link
CN (1) CN104463822B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105430266A (en) * 2015-11-30 2016-03-23 努比亚技术有限公司 Image processing method based on multi-scale transform and terminal
CN109583282A (en) * 2017-09-29 2019-04-05 高德软件有限公司 Vector road determination method and device
CN109671044A (en) * 2018-12-04 2019-04-23 重庆邮电大学 Multi-exposure image fusion method based on variable image decomposition
CN111489319A (en) * 2020-04-17 2020-08-04 电子科技大学 Infrared image enhancement method based on multi-scale bilateral filtering and visual saliency
CN115311691A (en) * 2022-10-12 2022-11-08 山东圣点世纪科技有限公司 Joint identification method based on wrist vein and wrist texture

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968883A (en) * 2010-10-28 2011-02-09 西北工业大学 Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics
CN102063713A (en) * 2010-11-11 2011-05-18 西北工业大学 Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
CN103632354A (en) * 2012-08-24 2014-03-12 西安元朔科技有限公司 Multi focus image fusion method based on NSCT scale product
CN103955910A (en) * 2014-05-13 2014-07-30 武汉科技大学 Multi-focus image fusing method based on measurement bilateral image gradient sharp degree
US20140341481A1 (en) * 2013-03-15 2014-11-20 Karen A. Panetta Methods and Apparatus for Image Processing and Analysis

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968883A (en) * 2010-10-28 2011-02-09 西北工业大学 Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics
CN102063713A (en) * 2010-11-11 2011-05-18 西北工业大学 Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
CN103632354A (en) * 2012-08-24 2014-03-12 西安元朔科技有限公司 Multi focus image fusion method based on NSCT scale product
US20140341481A1 (en) * 2013-03-15 2014-11-20 Karen A. Panetta Methods and Apparatus for Image Processing and Analysis
CN103955910A (en) * 2014-05-13 2014-07-30 武汉科技大学 Multi-focus image fusing method based on measurement bilateral image gradient sharp degree

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Peng, Ni Guoqiang, "Image fusion based on multi-scale flexible morphological filters," Journal of Optoelectronics · Laser (光电子·激光) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105430266A (en) * 2015-11-30 2016-03-23 努比亚技术有限公司 Image processing method based on multi-scale transform and terminal
CN109583282A (en) * 2017-09-29 2019-04-05 高德软件有限公司 Vector road determination method and device
CN109583282B (en) * 2017-09-29 2021-04-09 阿里巴巴(中国)有限公司 Vector road determining method and device
CN109671044A (en) * 2018-12-04 2019-04-23 重庆邮电大学 Multi-exposure image fusion method based on variable image decomposition
CN109671044B (en) * 2018-12-04 2019-10-08 重庆邮电大学 Multi-exposure image fusion method based on variable image decomposition
CN111489319A (en) * 2020-04-17 2020-08-04 电子科技大学 Infrared image enhancement method based on multi-scale bilateral filtering and visual saliency
CN115311691A (en) * 2022-10-12 2022-11-08 山东圣点世纪科技有限公司 Joint identification method based on wrist vein and wrist texture
CN115311691B (en) * 2022-10-12 2023-02-28 山东圣点世纪科技有限公司 Joint identification method based on wrist vein and wrist texture

Also Published As

Publication number Publication date
CN104463822B (en) 2017-08-25

Similar Documents

Publication Publication Date Title
CN109446992B (en) Remote sensing image building extraction method and system based on deep learning, storage medium and electronic equipment
Li et al. Survey of single image super‐resolution reconstruction
CN104268847A (en) Infrared light image and visible light image fusion method based on interactive non-local average filtering
CN106339998A (en) Multi-focus image fusion method based on contrast pyramid transformation
CN104463822A (en) Multi-focus image fusion method and device based on multi-scale global filtering
CN104200452A (en) Method and device for fusing infrared and visible light images based on spectral wavelet transformation
CN101556690A (en) Image super-resolution method based on overcomplete dictionary learning and sparse representation
CN105046672A (en) Method for image super-resolution reconstruction
CN103455991A (en) Multi-focus image fusion method
CN103279935A (en) Method and system of thermal infrared remote sensing image super-resolution reconstruction based on MAP algorithm
CN102136144A (en) Image registration reliability model and reconstruction method of super-resolution image
Chen et al. A new process for the segmentation of high resolution remote sensing imagery
Liu et al. A deep residual learning serial segmentation network for extracting buildings from remote sensing imagery
CN104657951A (en) Multiplicative noise removal method for image
CN103971354A (en) Method for reconstructing low-resolution infrared image into high-resolution infrared image
CN113313180A (en) Remote sensing image semantic segmentation method based on deep confrontation learning
Zou et al. Joint wavelet sub-bands guided network for single image super-resolution
Wang et al. PACCDU: Pyramid attention cross-convolutional dual UNet for infrared and visible image fusion
CN103020905A (en) Sparse-constraint-adaptive NLM (non-local mean) super-resolution reconstruction method aiming at character image
CN108334851B (en) Rapid polarization SAR image segmentation method based on anisotropic property
CN106971402B (en) SAR image change detection method based on optical assistance
CN103065296B (en) High-resolution remote sensing image residential area extraction method based on edge feature
Wang et al. Reconstruction of sub‐mm 3D pavement images using recursive generative adversarial network for faster texture measurement
CN117115563A (en) Remote sensing land coverage classification method and system based on regional semantic perception
CN117292126A (en) Building elevation analysis method and system using repeated texture constraint and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant