CN106327459A - Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network) - Google Patents
- Publication number: CN106327459A
- Application number: CN201610803598.7A
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- udct
- block
- coefficient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20064—Wavelet transform [DWT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and a PCNN (Pulse Coupled Neural Network), in the technical field of image processing. It solves the technical problem of the prior art that, because neither the similarity of the low-frequency information of the images to be fused nor the richness of detail of the source images can be judged, the fused image is unclear and its details are indistinct. The algorithm of the invention mainly comprises the following steps: (1) after the visible light and infrared source images are decomposed by the UDCT, UDCT subband coefficients at different scales and in different directions are obtained, comprising low-frequency and high-frequency UDCT coefficients; (2) according to specific rules, each scale layer is fused in its own way, with a low-frequency fusion rule applied to the low-frequency coefficients and a high-frequency fusion rule applied to the high-frequency coefficients, finally giving the fused UDCT coefficients of each layer; and (3) the image reconstructed by the inverse transform from the fused UDCT coefficients of each layer is the fused image.
Description
Technical field
The present invention relates to the technical field of image processing, and specifically to a visible light and infrared image fusion algorithm based on UDCT and PCNN.
Background technology
At present, using visible light and infrared image fusion to improve infrared image quality is a research hotspot in the field of infrared image enhancement, and multi-scale fusion is the mainstream research direction within this hotspot. The principle of multi-scale fusion is as follows. First, a multi-scale transform decomposes each source image into a series of sub-images that differ in scale and in frequency and spatial characteristics. Second, the transform coefficients of the source images are fused according to a specific fusion rule. Third, the fused transform coefficients are passed through the inverse of the multi-scale decomposition to obtain a high-quality fused image. Although traditional multi-scale fusion techniques, such as wavelet-based fusion, can improve image quality to some extent, they suffer from complicated structure and high data redundancy.
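The three-step multi-scale principle above can be sketched in code. No public UDCT implementation is assumed here; as an illustrative stand-in, a single-level split into a low band (box blur) and a high band (residual) plays the role of the multi-scale transform, with the averaging rule for the low band and the absolute-maximum rule for the high band:

```python
import numpy as np

def blur(img, k=5):
    """Box blur used as a stand-in low-pass filter (separable, reflect padding)."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    ker = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode="valid"), 0, tmp)

def fuse(a, b):
    """Step 1: decompose; step 2: fuse per band; step 3: invert the decomposition."""
    la, lb = blur(a), blur(b)
    ha, hb = a - la, b - lb                              # high band = residual
    low = 0.5 * (la + lb)                                # averaging rule (low band)
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)    # abs-max rule (high band)
    return low + high                                    # inverse of the one-level split
```

With a real multi-scale transform the same skeleton applies per scale layer and direction; only the decompose/reconstruct pair changes.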
To solve the above problems, the present invention decomposes the source images with the Uniform Discrete Curvelet Transform (UDCT). Introducing the uniform discrete curvelet transform into image fusion has several advantages. First, the UDCT decomposes the original image in multiple directions and at multiple scales, in a way that matches how humans perceive information, so it captures the detail features of the original image in every direction and supplies more effective reference information for image fusion. Second, the translation invariance of the UDCT effectively suppresses the adverse effect that registration errors have on the fusion result. In addition, the UDCT has low coefficient redundancy and a simple implementation, and its coefficients at different scales possess spatial correlation, so fusion rules can combine coefficients in a very flexible manner.
Furthermore, the fusion rule is the core link of image fusion; its quality largely determines the quality of the fused result. Since the Pulse Coupled Neural Network (PCNN) exhibits pulse firing, capture, and a variable threshold, characteristics consistent with the human visual system, the present invention proposes a visible light and infrared image fusion method that combines the uniform discrete curvelet transform with the pulse coupled neural network.
Summary of the invention
In view of the above prior art, the aim of the present invention is to provide a visible light and infrared image fusion algorithm based on UDCT and PCNN, solving the technical problems of the prior art that the fused image is unclear and its details indistinct because neither the similarity of the low-frequency information of the images to be fused nor the richness of detail of the source images can be judged.
To achieve the above aim, the technical solution adopted by the present invention is as follows.
The visible light and infrared image fusion algorithm based on UDCT and PCNN comprises the following steps.
Step 1: acquire the visible light source image and the infrared source image, and partition each into visible light image blocks and infrared image blocks respectively.
Step 2: apply the uniform discrete curvelet transform to the visible light and infrared source images; each source image yields a set of low-frequency and high-frequency subband coefficients at different scales and in different directions.
Step 3: compute the difference-of-Gaussian feature values of the visible light image blocks and the infrared image blocks; then, according to these feature values, select the high-frequency fusion rule and fuse the high-frequency subband coefficients through the pulse coupled neural network, obtaining the high-frequency fused coefficients.
Step 4: compute the energy similarity feature values of the visible light image blocks and the infrared image blocks; then, according to the energy similarity feature values and the difference-of-Gaussian feature values, select the low-frequency fusion rule and fuse the low-frequency subband coefficients, obtaining the low-frequency fused coefficients.
Step 5: from the low-frequency and high-frequency fused coefficients, obtain the full set of fused coefficients; then reconstruct the image by the inverse multi-scale transform, obtaining the fused image.
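Step 1, the block partition, can be sketched directly; the 3 × 3 block size used here follows the coefficient-block size stated later in the description, and the reflect padding at the image borders is an assumption, not taken from the patent:

```python
import numpy as np

def to_blocks(img, k=3):
    """Partition an image into k x k blocks; edges are padded by reflection."""
    H, W = img.shape
    ph, pw = (-H) % k, (-W) % k            # padding needed to reach a multiple of k
    p = np.pad(img, ((0, ph), (0, pw)), mode="reflect")
    # reshape to (block-row, k, block-col, k), then group block indices first
    return p.reshape(p.shape[0] // k, k, p.shape[1] // k, k).swapaxes(1, 2)
```

Each later step then operates on `to_blocks(visible)` and `to_blocks(infrared)` pairwise.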
In the above method, step 3 comprises the following steps.
Step 3.1: compute the difference-of-Gaussian feature value F_DoG of each visible light image block and infrared image block,
F_DoG = g1 - g2, with g1 = G_{σ1} * I and g2 = G_{σ2} * I,
where G_{σ1} and G_{σ2} denote two Gaussian functions with parameters σ1 and σ2 respectively, * denotes filtering, and I is a visible light or infrared image block.
Step 3.2: according to the difference-of-Gaussian feature values, take the following formula as the high-frequency fusion rule:
C_F(i,j) = C_A(i,j) if D_A(i,j) ≥ D_B(i,j), and C_F(i,j) = C_B(i,j) otherwise,
where the two-dimensional image coordinate (i,j) is the block center, D_A(i,j) and D_B(i,j) are the difference-of-Gaussian feature values of the visible light image block and the infrared image block respectively, and C_X, with X taken as F, A or B, denotes respectively the high-frequency fused coefficient, the high-frequency subband coefficient of visible light source image A, or the high-frequency subband coefficient of infrared source image B.
Step 3.3: fuse the high-frequency subband coefficients through the pulse coupled neural network, obtaining the high-frequency fused coefficients.
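The difference-of-Gaussian feature of step 3.1 can be sketched as follows; the filtering is implemented with separable convolution, and the default σ1 = 1, σ2 = 2 are illustrative values, since the patent does not fix the parameters here:

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()                     # normalised 1-D Gaussian

def gauss_blur(img, sigma):
    """Separable Gaussian filtering with reflect padding."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    p = np.pad(img, pad, mode="reflect")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def dog_feature(block, sigma1=1.0, sigma2=2.0):
    """F_DoG = g1 - g2; its energy measures how much detail the block holds."""
    f = gauss_blur(block, sigma1) - gauss_blur(block, sigma2)
    return np.sum(f**2)                    # block energy of the DoG response
```

A flat block yields (near-)zero DoG energy, while an edge or texture yields a large one, which is exactly the property step 3.2 exploits.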
In the above method, step 4 comprises the following steps.
Step 4.1: compute the energy similarity feature value S(i,j) of the visible light image block and the infrared image block by the formula
S(i,j) = min{E(F_A(i,j)), E(F_B(i,j))} / max{E(F_A(i,j)), E(F_B(i,j))},
where F_A(i,j) and F_B(i,j) are respectively the visible light image block and the infrared image block centered at the two-dimensional image coordinate (i,j), min{E(F_A(i,j)), E(F_B(i,j))} is the minimum of the two block energies, and max{E(F_A(i,j)), E(F_B(i,j))} is the maximum of the two block energies.
Step 4.2: preset a similarity threshold, and judge whether the energy similarity feature value S(i,j) exceeds it.
Step 4.2.1: if S(i,j) is greater than the similarity threshold, fuse the low-frequency subband coefficients by the averaging rule.
Step 4.2.2: if S(i,j) is less than or equal to the similarity threshold, choose whichever of the visible light image block and the infrared image block has the larger energy, and assign it the larger weight.
Step 4.3: combining the difference-of-Gaussian feature values, update the fusion formula to
C_F(i,j) = α1·C_A(i,j) + α2·C_B(i,j) if S(i,j) > T; otherwise C_F(i,j) = C_A(i,j) if D_A(i,j) ≥ D_B(i,j), else C_F(i,j) = C_B(i,j),
and set the threshold T, thereby obtaining the low-frequency fused coefficients. Here α1, α2 are weight coefficients, the two-dimensional image coordinate (i,j) is the block center, and C_X, with X taken as F, A or B, denotes respectively the low-frequency fused coefficient, the low-frequency subband coefficient of visible light source image A, or the low-frequency subband coefficient of infrared source image B.
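The energy similarity measure of step 4.1 is simple to state in code; the small epsilon guarding division by zero is an implementation detail added here, not part of the patent:

```python
import numpy as np

def block_energy(block):
    """E(.): sum of squared values of a coefficient block."""
    return np.sum(np.asarray(block, dtype=float) ** 2)

def energy_similarity(block_a, block_b, eps=1e-12):
    """S = min(E_A, E_B) / max(E_A, E_B), in [0, 1]; 1 means equal energies."""
    ea, eb = block_energy(block_a), block_energy(block_b)
    return min(ea, eb) / max(max(ea, eb), eps)
```

Comparing S(i,j) against the preset threshold then routes each block to the averaging rule (step 4.2.1) or the selection rule (step 4.2.2).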
Compared with the prior art, the beneficial effects of the present invention are as follows.
The fusion method of the present invention analyses the similarity of the low-frequency information with coefficient blocks; if the information is similar, simple averaging is used, and otherwise the difference-of-Gaussian feature selects whichever source image has the richer detail for the fused image.
In the fusion rule for the high-frequency signal, the method abandons the traditional approach of directly taking the high-frequency bandpass subband coefficient of larger absolute value. By introducing the pulse coupled neural network, the subband coefficient of the pixel whose neighbourhood feature in the firing map is more significant, that is, whose neighbourhood mean is larger, serves as the bandpass directional subband coefficient of the corresponding pixel of the fused image. Introducing the pulse coupled neural network for the high-frequency signal lets this fusion method make full use of the feature information in the infrared and visible light images; the method captures both the main content and the detail information of the two images better, and improves the visual expressiveness of the fused image.
The scenery in the fused image has strong layering, the noise is small, and Gibbs blocking artifacts are essentially eliminated, giving a better visual effect. Relative to the method based on generalized random walks, the present invention retains part of the edge information of the image, so the contrast is improved.
Brief description of the drawings
Fig. 1 is a block diagram of the image fusion method of the present invention;
Fig. 2 shows the infrared and visible light image groups used: (a) and (b) are infrared images, and (c) and (d) are the visible light images corresponding to (a) and (b) respectively;
Fig. 3 compares the fusion results of the various methods on the ship image: (a) infrared image, (b) visible light image, (c) result of the wavelet-based method, (d) result of the Contourlet-based method, (e) result of the UDCT-based fusion method, (f) result of the fusion method based on generalized random walks, (g) result of the method of the present invention;
Fig. 4 shows partial enlargements of the rectangular marked regions of the fusion results for the ship visible light and infrared image pair; panels (a) through (g) are enlargements of the rectangular regions in Fig. 3 (a) through (g);
Fig. 5 compares the fusion results of the various methods on the woods image: (a) infrared image, (b) visible light image, (c) result of the wavelet-based method, (d) result of the Contourlet-based method, (e) result of the UDCT-based fusion method, (f) result of the method based on generalized random walks, (g) result of the method of the present invention.
Detailed description of the invention
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.
The present invention is further described below with reference to the accompanying drawings.
Embodiment 1
In view of the excellent properties of the UDCT, it is applied here to the fusion of infrared and visible light images. The algorithm can be summarized in the following three steps:
(1) After the visible light and infrared source images are each decomposed by the UDCT, the UDCT subband coefficients at different scales and in different directions are obtained; they comprise the low-frequency and high-frequency UDCT coefficients.
(2) According to specific rules, each scale layer is fused in a different way: the low-frequency coefficients are processed with the low-frequency fusion rule and the high-frequency coefficients with the high-frequency fusion rule, finally giving the fused UDCT coefficients of each layer.
(3) The image reconstructed by the inverse transform from the fused UDCT coefficients of each layer is the fused image.
Fusion rules used by the algorithm
The choice of fusion rule directly determines the quality of image fusion; the fusion rule is therefore extremely important in the fusion process, and is also a focus of image research. In general, since the low-frequency part has characteristics different from the high-frequency part, the two parts are processed with different fusion rules.
Low-frequency fusion rule
The low-frequency signal of an image consists mainly of the principal information of the image together with some of the smaller image details. To embody the principal information of the infrared and visible light images in the fused image, the traditional method fuses the low-frequency part by weighted or simple averaging:
L_F(i,j) = α1·L_A(i,j) + α2·L_B(i,j),
where α1, α2 are weight coefficients; with α1 = α2 = 0.5 this is simple averaging.
However, if the low-frequency information of the source images differs greatly, the fusion result obtained by simple averaging is unsatisfactory and the features are blurred. To overcome this problem, this algorithm first analyses the low-frequency information of the source images: if the low-frequency information to be fused is very similar, simple averaging is used. But since image fusion is meant to yield a fused image that is as clear and as rich in detail as possible, when the low-frequency information of the source images is not very similar, the source image with the richer detail information is chosen for the fused image.
To achieve the above goal, the following two problems must be solved:
1) How to judge whether the low-frequency information of the source images to be fused is dissimilar?
2) How to judge which source image has the richer detail?
The first problem, judging whether the low-frequency information of the source images is dissimilar, can be solved as follows.
Since an image (coefficient) block carries far more information than a single pixel (coefficient) and is less susceptible to noise, coefficient blocks are used to analyse the similarity of the low-frequency information. To compute the difference between coefficient blocks quickly, the energy of each coefficient block can be computed directly, and the similarity then derived from the difference between the energies. This algorithm uses the following approximation (also called similarity) function S(.) to judge whether the low-frequency information of the source images is similar:
S(i,j) = min{E(F_A(i,j)), E(F_B(i,j))} / max{E(F_A(i,j)), E(F_B(i,j))},
where F_A(i,j) and F_B(i,j) are coefficient blocks of source images A and B centered at (i,j), the block size is set to 3 × 3, and E(.) computes the energy of a coefficient block. From the formula it can be seen that the range of S(i,j) is [0,1]: the higher the value of S(i,j), the more similar the low-frequency information of source images A and B; the lower the value, the larger the gap between their low-frequency information. If the low-frequency information of source images A and B is similar, that is, S(.) exceeds a preset threshold T, the averaging rule is used for fusion; if the similarity is low, that is, S(.) is below the preset threshold T, the coefficient with the larger energy is chosen, because that coefficient contains more detail, and the fused image is expected to contain more detail.
For the second problem, judging which source image has the richer detail, the difference-of-Gaussian feature (Difference of Gaussian, DoG) can be used to measure the detail: the larger the DoG feature energy, the richer the detail, and the more that source should be chosen for the fused image. The DoG is the difference of Gaussian functions; it is the subtraction of the images filtered by Gaussians with two different parameters.
The Gaussian filter function is expressed as
G_σ(x,y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)).
The Gaussian filterings of image I with the two different parameters are
g1 = G_{σ1} * I, g2 = G_{σ2} * I,
and the DoG feature is expressed as
F_DoG = g1 − g2,
where G_{σ1} and G_{σ2} denote the two Gaussian functions with parameters σ1 and σ2 respectively, I is the image to be processed, g1 and g2 are the images after Gaussian filtering of I, and F_DoG is the resulting DoG feature.
Since the information represented by an image (coefficient) block far exceeds that of a single pixel (coefficient), the energy is computed in units of coefficient blocks. The source image is divided into image blocks, the DoG feature of each block is computed, and the coefficient of the image whose block has the larger feature is chosen as the fused coefficient. For regions where the low-frequency information of the source images is dissimilar, the fusion rule is expressed as
C_F(i,j) = C_A(i,j) if D_A(i,j) ≥ D_B(i,j), else C_F(i,j) = C_B(i,j),
where D_A(i,j) is the energy of the DoG-filtered image block of image A centered at (i,j), and D_B(i,j) likewise for image B.
In sum, the low-frequency coefficient fusion rule is embodied as
C_F(i,j) = α1·C_A(i,j) + α2·C_B(i,j) if S(i,j) > T; otherwise C_A(i,j) if D_A(i,j) ≥ D_B(i,j), else C_B(i,j),
where T is a preset threshold and D is the energy, centered at (i,j), of the DoG-filtered image block.
High-frequency fusion rule
For the high-frequency components, the common practice is to fuse by taking the high-frequency bandpass subband coefficient of larger absolute value, because in general the larger the coefficient, the richer the high-frequency information it represents:
C_F(i,j) = C_A(i,j) if |C_A(i,j)| ≥ |C_B(i,j)|, else C_B(i,j).
The algorithm of the present invention instead applies the UDCT to the source images to obtain the UDCT coefficients, and feeds them into the PCNN as the inputs of its neurons; that is, the UDCT coefficients and the PCNN input neurons are in one-to-one correspondence. Each neuron is linked with several neighbouring neurons in its linking field, and a neuron's output has two states, firing and non-firing. A firing map is then generated from the accumulated number of firings of each neuron.
Each subband coefficient map obtained by the UDCT is input into the PCNN separately, and after the same number of iterations each yields its own firing map. Comparing the firing counts at corresponding pixels of the firing maps tells whether the target at that position is a high-frequency detail region. Thus, during fusion, the UDCT coefficients of the source images are input into the PCNN, and the firing maps generated from the firing counts of the neurons efficiently extract the corresponding high-frequency information, such as edges and texture, from the coefficient maps. Moreover, the more firings the neuron of a given coefficient point accumulates, the richer the information of that point in the image. The bandpass directional subband coefficient of the corresponding pixel of the fused image can therefore be taken directly from the pixel whose neighbourhood feature in the firing map is more significant, that is, whose neighbourhood mean is larger.
If the firing counts of the PCNN neurons corresponding to the transform coefficients of source images A and B at position (i,j) are denoted P_A(i,j) and P_B(i,j), the high-frequency fusion rule is
C_F(i,j) = C_A(i,j) if P_A(i,j) ≥ P_B(i,j), else C_B(i,j).
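A minimal sketch of the firing-map mechanism follows, using a common simplified PCNN (feeding F = normalised coefficient magnitude, linking L from the 4-neighbourhood, internal activity U = F(1 + βL), and an exponentially decaying threshold). The parameter values (β, the decay rate, the threshold boost, the iteration count) are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def firing_map(coeffs, iters=30, beta=0.2, alpha_t=0.2, v_t=20.0):
    """Simplified PCNN: per-pixel firing counts for one coefficient map."""
    s = np.abs(coeffs)
    s = s / (s.max() + 1e-12)          # normalised stimulus = feeding input F
    theta = np.ones_like(s)            # dynamic threshold
    y = np.zeros_like(s)               # pulse output of the previous iteration
    fired = np.zeros_like(s)           # accumulated firing counts
    for _ in range(iters):
        p = np.pad(y, 1, mode="constant")
        link = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]  # linking L
        u = s * (1.0 + beta * link)    # internal activity U = F(1 + beta*L)
        y = (u > theta).astype(float)  # fire where activity exceeds the threshold
        theta = theta * np.exp(-alpha_t) + v_t * y  # decay plus refractory boost
        fired += y
    return fired

def fuse_high(ca, cb, **kw):
    """Keep, per position, the coefficient whose neuron fired more often."""
    return np.where(firing_map(ca, **kw) >= firing_map(cb, **kw), ca, cb)
```

Large coefficients fire earlier and more often as the threshold decays, so the firing count tracks local high-frequency richness, which is the property the fusion rule relies on.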
Fusion algorithm flow based on the UDCT and the PCNN
The schematic diagram of image fusion based on UDCT and PCNN is shown in Fig. 1. The basic fusion steps are: decompose source images A and B by the UDCT, obtaining UDCT subband coefficients at different scales and in different directions. The UDCT coefficients comprise the coarse scale and the finest scale at the top; except for the top layer, every fine scale layer consists of UDCT coefficients in different directions. Each scale layer is then fused with its own fusion rule.
Embodiment 2
To verify the effectiveness of the algorithm, the experiments use two groups of registered infrared and visible light images of the same scenes; the experimental images are shown in Fig. 2. The method proposed by the present invention is compared with the methods based on the wavelet transform, the undecimated wavelet transform, the contourlet transform, the UDCT, and generalized random walks. In the experiments, both the wavelet transform and the undecimated wavelet transform use a 3-level decomposition with the Haar wavelet basis, the maximum-coefficient rule for fusing the high-frequency coefficients, and the averaging rule for fusing the low-frequency coefficients.
The experimental results of the different fusion methods applied to the ship images of Fig. 2 are shown in Fig. 3. Fig. 3(a) and Fig. 3(b) are respectively the infrared and visible light images of the ship; Fig. 3(c) is the result of the wavelet-based method; Fig. 3(d) is the result of the Contourlet-based method; Fig. 3(e) is the fusion result based on UDCT; Fig. 3(f) is the result of the method based on generalized random walks; Fig. 3(g) is the result of the method proposed by the present invention. In terms of subjective visual effect, all of the fusion algorithms retain the main information of the infrared and visible light images to varying degrees.
Fig. 3: comparison of the fusion results of the various methods on the ship image
Comparing the fused images, it can be found that the scenery of the wavelet-based fusion result lacks layering, its noise is relatively large, and obvious Gibbs blocking artifacts are present; the fusion result based on the contourlet transform is similar in effect to the wavelet-based one. Close inspection shows that the contourlet result is slightly better than the wavelet result, because the contourlet transform performs better than the wavelet transform. The result of the UDCT-based fusion method is better than the previous two, because the UDCT outperforms both the wavelet and contourlet transforms. The method based on generalized random walks, being newer than the preceding methods, gives a smoother fused image whose overall structure is better than theirs. It can be seen that the fusion result of the method proposed by the present invention has stronger layering and a better visual effect, and on the whole is clearly superior to the fusion methods based on wavelets, Contourlet, and UDCT. Although the result of the method based on generalized random walks looks smoother than the result of the present method, it smears away part of the edge information of the image, which lowers its contrast.
Fig. 5: comparison of the fusion results of the various methods on the woods image
To observe the visual effect of the algorithm of the invention further, the rectangular marked regions in Fig. 3 are enlarged; the enlarged images are shown in Fig. 4. It can be seen that the wavelet-based and Contourlet-based methods clearly exhibit some mosaics, and their Gibbs blocking artifacts are quite serious. The UDCT-based fusion method overcomes the blocking artifacts to some degree, but a certain amount of noise remains. The method based on generalized random walks is overall quite smooth but loses part of the edge information. The algorithm of this paper overcomes these problems better and has a relatively sharp visual effect, mainly because it not only uses the well-performing UDCT but also fuses the high-frequency coefficients with the PCNN, so the fusion result obtained is more satisfactory.
Fig. 5 compares the fusion results of the various methods on the woods image; close inspection leads to the same conclusion as Fig. 3, namely that the algorithm of this paper obtains the best visual effect compared with the other methods.
Table 1: objective evaluation indices of the overall performance test of the algorithms
To analyse the experimental results objectively, three objective evaluation indices, QSSIM, mutual information (MI), and QAB/F, are employed here to analyse the fusion results. Mutual information reflects the amount of information the fused image obtains from the source images; the larger the value, the better the fusion. QAB/F reflects the edge richness of the fused image; its range is 0 to 1, and the larger the value, the better the fusion. QSSIM is an image structural similarity index representing the correlation of neighbouring pixels; its range is −1 to 1, and the larger the value, the more similar the fused signals. The objective evaluation of the fusion results of the different methods is shown in Table 1.
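The mutual-information index can be sketched from the joint grey-level histogram; the bin count of 32 is an illustrative choice, and summing MI(A, F) + MI(B, F) is the usual fusion-quality convention, which this sketch assumes the patent also follows:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """MI between two images from their joint grey-level histogram (base-2)."""
    h, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = h / h.sum()                          # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x), column vector
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y), row vector
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(a, b, fused, bins=32):
    """Fusion MI: information the fused image shares with both sources."""
    return mutual_information(a, fused, bins) + mutual_information(b, fused, bins)
```

A fused image identical to a source has maximal MI with it, while an unrelated constant image has MI near zero, matching the "larger is better" reading in the text.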
From Table 1 it can be seen that, although in the QSSIM index the method of the present invention and the compared methods each have their merits, the other two indices show that the wavelet-based and Contourlet-based fusion methods differ little in performance and are comparatively low; the UDCT-based method is clearly better than both; and the newer method based on generalized random walks is in turn much better than the preceding methods. Although the method of the invention is slightly worse than the method based on generalized random walks in the mutual information index, it is clearly better in edge richness, which is consistent with the subjective visual effect.
Table 2. MI comparison for the fusion contrast experiments on five additional image pairs
Table 3. QAB/F comparison for the fusion contrast experiments on five additional image pairs
To analyse the performance of the proposed algorithm more objectively, Tables 2, 3 and 4 give comparative experimental results for five additional pairs of registered images. The data in these tables also confirm the analysis above: although the proposed method and the compared methods each have advantages and disadvantages in mutual information and structural similarity, the proposed method preserves edges better, so the target features of the fused image are clearer and the subjective visual effect is better. The fusion rules of the proposed algorithm do not, however, take the importance of image regions into account, so there remains room to improve the fusion further.
Table 4. QSSIM comparison for the fusion contrast experiments on five additional image pairs
The present invention proposes a new method for fusing visible light and infrared images, namely a fusion method based on the uniform discrete curvelet transform and a pulse coupled neural network. The method is distinguished as follows:
It makes full use of the multi-scale, multi-directional characteristics of the uniform discrete curvelet transform to decompose the source images, capturing their fine detail information more effectively and laying a better foundation for the subsequent fusion step.
Traditional fusion methods fuse the low-frequency signal by simple averaging; although easy to apply, the result is often unsatisfactory and tends to blur the main subject. To overcome this, the fusion method of the present invention analyses the similarity of the low-frequency information using coefficient blocks: if the information is similar, simple averaging is used; otherwise the difference-of-Gaussians (DoG) feature is used to select whichever source image has the richer detail, and its coefficients are chosen for the fused image. In the fusion rule for the high-frequency signal, the method abandons the traditional approach of directly taking the band-pass directional subband coefficient with the larger absolute value. Instead, a pulse coupled neural network is introduced: the subband coefficient whose neighbourhood feature in the firing map is more significant, i.e. the coefficient of the pixel with the larger neighbourhood mean firing, is taken as the band-pass directional subband coefficient of the corresponding pixel of the fused image. By introducing the PCNN into the high-frequency signal, the fusion method can make full use of the characteristic information of both the infrared image and the visible light image.
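The PCNN-based high-frequency rule described above can be sketched as follows. This is a simplified neuron model, not the patent's exact network: the linking kernel, decay rate, linking strength `beta` and threshold amplitude are illustrative assumptions. Each coefficient's absolute value drives a neuron whose threshold decays over iterations; neurons that fire raise their neighbours' internal activity, and the coefficient whose neuron accumulates more firings is kept.

```python
import numpy as np
from scipy.ndimage import convolve

def fire_map(stim, iters=30, decay=0.2, v_theta=20.0, beta=0.1):
    # Simplified PCNN: thresholds decay each iteration; the linking input
    # is the sum of 3x3-neighbour firings from the previous iteration.
    kernel = np.ones((3, 3)); kernel[1, 1] = 0
    y = np.zeros_like(stim)                      # current firing state
    fires = np.zeros_like(stim)                  # accumulated firing counts
    theta = np.full_like(stim, stim.max() + 1)   # dynamic threshold
    for _ in range(iters):
        link = convolve(y, kernel, mode='constant')
        u = stim * (1.0 + beta * link)           # internal activity
        y = (u > theta).astype(stim.dtype)
        fires += y
        theta = theta * np.exp(-decay) + v_theta * y
    return fires

def fuse_high(ha, hb):
    # Keep, per pixel, the subband coefficient with the larger firing count
    return np.where(fire_map(np.abs(ha)) >= fire_map(np.abs(hb)), ha, hb)
```

Stronger coefficients fire earlier and more often, so regions of significant detail in either source win the per-pixel comparison.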
The experimental results show that the proposed method captures the subjects and detail information of the visible light and infrared images better, improving the visual expressiveness of the fused image; meanwhile, analysis of the objective evaluation data of the experiments shows that the proposed fusion method is also superior to the traditional methods.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that would readily occur to those skilled in the art within the technical scope disclosed by the invention shall fall within the protection scope of the present invention.
Claims (3)
1. A visible light and infrared image fusion algorithm based on UDCT and PCNN, characterised in that it comprises the following steps:
Step 1: acquiring a visible light source image and an infrared source image, and dividing the visible light source image and the infrared source image respectively into visible light image blocks and infrared image blocks;
Step 2: performing the uniform discrete curvelet transform on the visible light source image and the infrared source image, each source image yielding a set of low-frequency subband coefficients and high-frequency subband coefficients at different scales and in different directions;
Step 3: calculating the difference-of-Gaussians feature values of the visible light image blocks and the infrared image blocks, then, according to the difference-of-Gaussians feature values, selecting the high-frequency fusion rule and fusing the high-frequency subband coefficients through the pulse coupled neural network to obtain the high-frequency fusion coefficients;
Step 4: calculating the energy similarity feature values of the visible light image blocks and the infrared image blocks, then, according to the energy similarity feature values and the difference-of-Gaussians feature values, selecting the low-frequency fusion rule and fusing the low-frequency subband coefficients to obtain the low-frequency fusion coefficients;
Step 5: obtaining the fusion coefficients from the low-frequency fusion coefficients and the high-frequency fusion coefficients, then, according to the fusion coefficients, performing image reconstruction by the inverse multi-scale transform to obtain the fused image.
2. The visible light and infrared image fusion algorithm based on UDCT and PCNN according to claim 1, characterised in that said step 3 comprises the following steps:
Step 3.1: calculating the difference-of-Gaussians feature value FDoG of the visible light image blocks and the infrared image blocks,
wherein the two Gaussian functions have parameters σ1 and σ2 respectively, and I is a visible light image block or an infrared image block;
Step 3.2: according to the difference-of-Gaussians feature value, taking the following formula as the high-frequency fusion rule,
wherein (i, j) is the central two-dimensional image coordinate, DA(i, j) and DB(i, j) are respectively the difference-of-Gaussians feature values of the visible light image block and the infrared image block, X takes F, A or B, and the corresponding quantity is respectively the high-frequency fusion coefficient, the high-frequency subband coefficient of the visible light source image A, or the high-frequency subband coefficient of the infrared source image B;
Step 3.3: fusing the high-frequency subband coefficients through the pulse coupled neural network to obtain the high-frequency fusion coefficients.
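The DoG feature of step 3.1 and a step-3.2-style selection can be sketched as follows. This is a hedged illustration: the patent's formulas are images not reproduced in the text above, and the σ values and window handling here are assumptions, not the claimed values. A block is filtered with two Gaussians and the difference taken; the source with the stronger absolute response at each pixel contributes its subband coefficient.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_feature(block, sigma1=1.0, sigma2=2.0):
    # Difference of two Gaussian-smoothed copies: a band-pass detail measure
    return gaussian_filter(block, sigma1) - gaussian_filter(block, sigma2)

def select_high(coef_a, coef_b, block_a, block_b):
    # Per pixel, take the coefficient of the source with the larger |DoG|
    da, db = np.abs(dog_feature(block_a)), np.abs(dog_feature(block_b))
    return np.where(da >= db, coef_a, coef_b)
```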
3. The visible light and infrared image fusion algorithm based on UDCT and PCNN according to claim 2, characterised in that said step 4 comprises the following steps:
Step 4.1: calculating, by the following formula, the energy similarity feature value S(i, j) of the visible light image blocks and the infrared image blocks,
wherein FA(i, j) and FB(i, j) are respectively the visible light image block and the infrared image block centred at the two-dimensional image coordinate (i, j), min{E(FA(i, j)), E(FB(i, j))} is the minimum of the energies of the visible light image block and the infrared image block, and max{E(FA(i, j)), E(FB(i, j))} is the maximum of those energies;
Step 4.2: presetting a similarity threshold, and judging whether the energy similarity feature value S(i, j) is greater than the similarity threshold;
Step 4.2.1: if the energy similarity feature value S(i, j) is greater than the similarity threshold, choosing the averaging rule to fuse the low-frequency subband coefficients;
Step 4.2.2: if the energy similarity feature value S(i, j) is less than or equal to the similarity threshold, choosing the visible light image block or infrared image block with the larger energy and assigning it the larger weight;
Step 4.3: combining the difference-of-Gaussians feature value, updating the following formula
into the fusion formula
and setting a threshold T, thereby obtaining the low-frequency fusion coefficients, wherein α1, α2 are weight coefficients, X takes F, A or B, (i, j) is the central two-dimensional image coordinate, and the corresponding quantity is respectively the low-frequency fusion coefficient, the low-frequency subband coefficient of the visible light source image A, or the low-frequency subband coefficient of the infrared source image B.
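A minimal sketch of the step-4 low-frequency rule follows. It is an assumption-laden illustration, not the claimed formulas (which are images not reproduced above): the window size, threshold and weight value stand in for the patent's similarity threshold and weight coefficients α1, α2. Local energies of the two low-frequency subbands are compared; where the energy similarity measure exceeds the threshold the subbands are averaged, otherwise the higher-energy source receives the larger weight.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low(la, lb, win=3, thresh=0.8, alpha=0.7):
    ea = uniform_filter(la * la, win)        # local energy of subband A
    eb = uniform_filter(lb * lb, win)        # local energy of subband B
    # Energy similarity measure in [0, 1]: 1 means equal local energies
    s = np.minimum(ea, eb) / np.maximum(np.maximum(ea, eb), 1e-12)
    averaged = 0.5 * (la + lb)               # rule when energies are similar
    wa = np.where(ea >= eb, alpha, 1.0 - alpha)  # larger weight to larger energy
    weighted = wa * la + (1.0 - wa) * lb
    return np.where(s > thresh, averaged, weighted)
```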
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610803598.7A CN106327459B (en) | 2016-09-06 | 2016-09-06 | Visible light and infrared image fusion method based on UDCT and PCNN |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106327459A true CN106327459A (en) | 2017-01-11 |
CN106327459B CN106327459B (en) | 2019-03-12 |
Family
ID=57788074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610803598.7A Active CN106327459B (en) | 2016-09-06 | 2016-09-06 | Visible light and infrared image fusion method based on UDCT and PCNN |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106327459B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020131648A1 (en) * | 2001-03-13 | 2002-09-19 | Tadao Hayashide | Image processing apparatus and image processing method |
CN101697231A (en) * | 2009-10-29 | 2010-04-21 | 西北工业大学 | Wavelet transformation and multi-channel PCNN-based hyperspectral image fusion method |
CN104077762A (en) * | 2014-06-26 | 2014-10-01 | 桂林电子科技大学 | Multi-focusing-image fusion method based on NSST and focusing area detecting |
CN104463821A (en) * | 2014-11-28 | 2015-03-25 | 中国航空无线电电子研究所 | Method for fusing infrared image and visible light image |
CN104616261A (en) * | 2015-02-09 | 2015-05-13 | 内蒙古科技大学 | Method for fusing Shearlet domain multi-spectral and full-color images based on spectral characteristics |
CN104978724A (en) * | 2015-04-02 | 2015-10-14 | 中国人民解放军63655部队 | Infrared polarization fusion method based on multi-scale transformation and pulse coupled neural network |
CN105388414A (en) * | 2015-10-23 | 2016-03-09 | 国网山西省电力公司大同供电公司 | Omnidirectional fault automatic identification method of isolation switch |
CN105551010A (en) * | 2016-01-20 | 2016-05-04 | 中国矿业大学 | Multi-focus image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and depth information incentive PCNN (Pulse Coupled Neural Network) |
Non-Patent Citations (4)
Title |
---|
LIANG XU ET AL.: "Feature-Based Image Fusion With a Uniform Discrete Curvelet Transform", 《INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS》 * |
XU, LIANG: "Research on Cross-Scale Analysis and Fusion of Multi-Sensor Moving Images", 《China Doctoral Dissertations Full-text Database, Information Science and Technology Series》 * |
FAN, YAPING; HUANG, SHENGXUE: "Cartoon-style image generation algorithm based on Mean-shift and DoG", 《Coal Technology》 * |
WANG, JINLING: "Research on Remote Sensing Image Fusion Algorithms Based on Multi-Resolution Analysis", 《China Doctoral Dissertations Full-text Database, Information Science and Technology Series》 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106981059A (en) * | 2017-03-30 | 2017-07-25 | 中国矿业大学 | With reference to PCNN and the two-dimensional empirical mode decomposition image interfusion method of compressed sensing |
CN107194904B (en) * | 2017-05-09 | 2019-07-19 | 西北工业大学 | NSCT area image fusion method based on supplement mechanism and PCNN |
CN107194904A (en) * | 2017-05-09 | 2017-09-22 | 西北工业大学 | NSCT area image fusion methods based on supplement mechanism and PCNN |
CN107451984A (en) * | 2017-07-27 | 2017-12-08 | 桂林电子科技大学 | A kind of infrared and visual image fusion algorithm based on mixing multiscale analysis |
CN107886488A (en) * | 2017-12-04 | 2018-04-06 | 国网山东省电力公司电力科学研究院 | Based on AUV image interfusion methods, processor and the system for improving PCNN compensation |
CN108717689A (en) * | 2018-05-16 | 2018-10-30 | 北京理工大学 | Middle LONG WAVE INFRARED image interfusion method and device applied to naval vessel detection field under sky and ocean background |
CN108681722A (en) * | 2018-05-24 | 2018-10-19 | 辽宁工程技术大学 | A kind of finger vein features matching process based on texture |
CN108681722B (en) * | 2018-05-24 | 2021-09-21 | 辽宁工程技术大学 | Finger vein feature matching method based on texture |
CN109064436A (en) * | 2018-07-10 | 2018-12-21 | 西安天盈光电科技有限公司 | Image interfusion method |
CN109242812A (en) * | 2018-09-11 | 2019-01-18 | 中国科学院长春光学精密机械与物理研究所 | Image interfusion method and device based on conspicuousness detection and singular value decomposition |
CN110298807A (en) * | 2019-07-05 | 2019-10-01 | 福州大学 | Based on the domain the NSCT infrared image enhancing method for improving Retinex and quantum flora algorithm |
CN110874581A (en) * | 2019-11-18 | 2020-03-10 | 长春理工大学 | Image fusion method for bioreactor of cell factory |
CN110874581B (en) * | 2019-11-18 | 2023-08-01 | 长春理工大学 | Image fusion method for bioreactor of cell factory |
CN111429391A (en) * | 2020-03-23 | 2020-07-17 | 西安科技大学 | Infrared and visible light image fusion method, fusion system and application |
CN111429391B (en) * | 2020-03-23 | 2023-04-07 | 西安科技大学 | Infrared and visible light image fusion method, fusion system and application |
CN111815550A (en) * | 2020-07-04 | 2020-10-23 | 淮阴师范学院 | Infrared and visible light image fusion method based on gray level co-occurrence matrix |
CN111815550B (en) * | 2020-07-04 | 2023-09-15 | 淮阴师范学院 | Infrared and visible light image fusion method based on gray level co-occurrence matrix |
CN115578304A (en) * | 2022-12-12 | 2023-01-06 | 四川大学 | Multi-band image fusion method and system combining saliency region detection |
CN116188975A (en) * | 2023-01-03 | 2023-05-30 | 国网江西省电力有限公司电力科学研究院 | Power equipment fault identification method and system based on air-ground visual angle fusion |
Also Published As
Publication number | Publication date |
---|---|
CN106327459B (en) | 2019-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106327459A (en) | Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network) | |
CN107194904B (en) | NSCT area image fusion method based on supplement mechanism and PCNN | |
Li et al. | Infrared and visible image fusion using a deep learning framework | |
CN107341786B (en) | The infrared and visible light image fusion method that wavelet transformation and joint sparse indicate | |
CN107451984A (en) | A kind of infrared and visual image fusion algorithm based on mixing multiscale analysis | |
CN111709902A (en) | Infrared and visible light image fusion method based on self-attention mechanism | |
CN104408700A (en) | Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images | |
CN113837974B (en) | NSST domain power equipment infrared image enhancement method based on improved BEEPS filtering algorithm | |
CN112950518B (en) | Image fusion method based on potential low-rank representation nested rolling guide image filtering | |
CN105551010A (en) | Multi-focus image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and depth information incentive PCNN (Pulse Coupled Neural Network) | |
CN109636766A (en) | Polarization differential and intensity image Multiscale Fusion method based on marginal information enhancing | |
CN101231748A (en) | Image anastomosing method based on singular value decomposition | |
CN109410157A (en) | The image interfusion method with PCNN is decomposed based on low-rank sparse | |
CN103971329A (en) | Cellular nerve network with genetic algorithm (GACNN)-based multisource image fusion method | |
CN107886488A (en) | Based on AUV image interfusion methods, processor and the system for improving PCNN compensation | |
CN105139371A (en) | Multi-focus image fusion method based on transformation between PCNN and LP | |
CN113298147B (en) | Image fusion method and device based on regional energy and intuitionistic fuzzy set | |
Liu et al. | An effective wavelet-based scheme for multi-focus image fusion | |
CN109961408A (en) | The photon counting Image denoising algorithm filtered based on NSCT and Block- matching | |
CN105809650A (en) | Bidirectional iteration optimization based image integrating method | |
Junwu et al. | An infrared and visible image fusion algorithm based on LSWT-NSST | |
CN103198456B (en) | Remote sensing image fusion method based on directionlet domain hidden Markov tree (HMT) model | |
Su et al. | GeFuNet: A knowledge-guided deep network for the infrared and visible image fusion | |
CN106530277A (en) | Image fusion method based on wavelet direction correlation coefficient | |
Liu et al. | SI-SA GAN: A generative adversarial network combined with spatial information and self-attention for removing thin cloud in optical remote sensing images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |