CN107451984A - Infrared and visible light image fusion algorithm based on mixed multi-scale analysis - Google Patents
Infrared and visible light image fusion algorithm based on mixed multi-scale analysis
- Publication number
- CN107451984A CN107451984A CN201710621620.0A CN201710621620A CN107451984A CN 107451984 A CN107451984 A CN 107451984A CN 201710621620 A CN201710621620 A CN 201710621620A CN 107451984 A CN107451984 A CN 107451984A
- Authority
- CN
- China
- Prior art keywords
- frequency sub-band
- fusion
- image
- infrared
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20064—Wavelet transform [DWT]
Abstract
The invention discloses an infrared and visible light image fusion algorithm based on mixed multi-scale analysis, comprising the following steps. Step 1: perform NSCT decomposition on the infrared and visible light images to obtain low-frequency and high-frequency sub-bands. Step 2: apply the stationary wavelet transform to the low-frequency sub-band to obtain one low-frequency sub-band and three high-frequency sub-bands, and fuse the low- and high-frequency sub-bands using a combination of the local-energy and absolute-value-maximum rules together with compressed-sensing theory. Step 3: evaluate the definition of the images to be fused and choose the number of LSCN enhancement layers according to a decision rule. Step 4: fuse the top-level high-frequency sub-band with the absolute-value-maximum fusion rule and the remaining sub-bands with an improved PCNN model. Step 5: apply the inverse NSCT to the fusion result to obtain the final fused image. The fused image obtained by the invention has prominent edges, high contrast and salient targets, and indices such as the algorithm's average gradient and spatial frequency are all higher than those of the prior art.
Description
Technical field
The invention belongs to the technical field of image processing, and relates in particular to an infrared and visible light image fusion algorithm based on mixed multi-scale analysis.
Background technology
Image fusion methods based on the wavelet transform are classical fusion algorithms, but wavelets can only represent isotropic features; for features such as lines and edges in an image, the wavelet is not an ideal representation tool. The Contourlet transform is widely used in image fusion: through its multi-scale, multi-directional decomposition it captures the fine details of an image well, compensating for this shortcoming of wavelets. However, because the Contourlet transform employs down-sampling, it is not shift-invariant and readily produces pseudo-Gibbs artifacts in image processing.
The non-subsampled Contourlet transform (NSCT) proposed by A. L. da Cunha et al. is shift-invariant and can fully retain the effective information of an image, producing better fusion results; however, the sparsity of its low-frequency part is poor, which is unfavorable for feature extraction.
Summary of the invention
In view of the shortcomings of the prior art, the problem solved by the invention is how to address the low contrast and insufficient retention of edge information in infrared and visible light image fusion.
To solve the above technical problem, the technical solution adopted by the invention is an infrared and visible light image fusion algorithm based on mixed multi-scale analysis, comprising the following steps:
Step 1: Perform NSCT decomposition on the infrared and visible light images respectively to obtain the low-frequency sub-band L_J(x, y) and the high-frequency sub-bands H_{j,r}(x, y), where J is the number of decomposition levels and j, r denote the decomposition scale and direction number.
Step 2: Apply the stationary wavelet transform to the low-frequency sub-band to obtain one low-frequency sub-band and three high-frequency sub-bands; fuse the low- and high-frequency sub-bands using a combination of the local-energy and absolute-value-maximum rules together with compressed-sensing theory, then apply the inverse wavelet transform to obtain the low-frequency sub-band for NSCT reconstruction.
The low-frequency sub-band is fused by combining the local-energy and absolute-value-maximum rules with compressed-sensing theory, as follows:
where EN is the local-area energy, defined as:
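The local-energy formula referred to above appears only as an image in the original document and is not reproduced here. A common definition is the sum of squared coefficients over a small window; the following Python sketch (an illustration under that assumption, not the patent's exact rule) combines it with a simple choose-max fusion of two low-frequency sub-bands:

```python
def local_energy(img, x, y, win=1):
    """Local-area energy EN: sum of squared coefficients in a window
    centred on (x, y), clipped at the image borders."""
    h, w = len(img), len(img[0])
    return sum(
        img[i][j] ** 2
        for i in range(max(0, x - win), min(h, x + win + 1))
        for j in range(max(0, y - win), min(w, y + win + 1))
    )

def fuse_lowpass(a, b):
    """Per coefficient, keep the input whose local energy is larger."""
    return [
        [a[x][y] if local_energy(a, x, y) >= local_energy(b, x, y) else b[x][y]
         for y in range(len(a[0]))]
        for x in range(len(a))
    ]

A = [[1.0, 0.0], [0.0, 0.0]]
B = [[0.0, 2.0], [0.0, 0.0]]
F = fuse_lowpass(A, B)  # B has the larger local energy everywhere here
```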
The high-frequency sub-bands are fused by combining the local-energy and absolute-value-maximum rules with compressed-sensing theory, in the following steps:
1) Divide the m × n high-frequency sub-band images into non-overlapping sub-blocks of equal size, where j = 1, 2, 3, and sparsify each sub-block using the sym8 wavelet basis;
2) Design a measurement matrix Φ and sample the input high-frequency sub-band coefficients with it to obtain the measurement vectors, where k = 1, 2, …, m × n;
3) Compute the standard deviation SD_k and the definition EAV_k of the measurement vectors, and obtain the fused measurement vector using a fusion rule combining the regional standard deviation, the regional definition and an S-function, i.e.:
The image standard deviation formula is:
where
the image definition formula is:
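The standard-deviation and definition formulas above are likewise images in the original. The sketch below uses the usual standard deviation and, for the definition EAV, assumes a mean absolute difference between neighbouring samples (an assumption, since the exact formula is not reproduced):

```python
import math

def std_dev(v):
    """Standard deviation of a measurement vector."""
    mu = sum(v) / len(v)
    return math.sqrt(sum((x - mu) ** 2 for x in v) / len(v))

def eav(v):
    """Definition (clarity) measure, assumed here to be the mean
    absolute difference between neighbouring samples."""
    return sum(abs(v[i] - v[i - 1]) for i in range(1, len(v))) / (len(v) - 1)
```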
The weight coefficient ω is obtained from an S-function, the S-function used being:
where
f is the contraction factor of the S-function, with f ≥ 1; here f = 5 is taken.
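The S-function itself is not reproduced in the text. Assuming the common logistic form, with the contraction factor f controlling how sharply the weight saturates, the weight can be sketched as:

```python
import math

def s_weight(d, f=5.0):
    """Sigmoid (S-function) weight; f >= 1 is the contraction factor and
    d is a normalised difference between the two activity measures
    (assumed argument, since the patent's formula is an image)."""
    return 1.0 / (1.0 + math.exp(-f * d))
```

With d = 0 (equal activity in both sources) the weight is 0.5, and larger |d| pushes the weight towards 0 or 1.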
4) Perform sparse reconstruction on the fused measurement vector using the OMP reconstruction algorithm, thereby obtaining the high-frequency sub-band of the fused image.
The obtained SW_F and the fused high-frequency sub-bands are then combined by inverse stationary wavelet reconstruction to obtain the low-frequency sub-band finally used for NSCT reconstruction.
Step 3: Evaluate the definition of the images to be fused and choose the number of LSCN enhancement layers according to the decision rule, as follows:
The image definition formula is:
The image definition is computed according to formula (8) and compared with the threshold λ; the number of high-frequency coefficient enhancement layers is determined from the comparison, i.e.:
where J is the number of decomposition levels and S is the composite definition of the source images; α1 = α2 = 0.5 and λ = 27 are taken.
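The piecewise decision rule itself is an image in the original. The sketch below assumes one plausible reading (enhance all J levels when the composite definition S falls below λ, otherwise only the top level), with the stated α1 = α2 = 0.5 and λ = 27:

```python
def enhancement_layers(s_ir, s_vis, J, alpha1=0.5, alpha2=0.5, lam=27.0):
    """Choose how many high-frequency levels receive LSCN enhancement.
    The composite definition S is a weighted sum of the two source
    definitions; the threshold rule is an assumption."""
    S = alpha1 * s_ir + alpha2 * s_vis
    return J if S < lam else 1
```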
Step 4: Fuse the top-level high-frequency sub-band with the absolute-value-maximum fusion rule and the remaining sub-bands with the improved PCNN model. The specific fusion rule is as follows:
To improve the visual perception of the image, the remaining sub-bands other than the top-level high-frequency sub-band n are fused with the improved PCNN model, and the fusion coefficients are determined by comparing the sums of the firing amplitudes of the PCNN neurons, i.e.:
where M_ij(n) is the summed firing amplitude of the PCNN output pulses, j = 1, 2, …, n−1, and ε is a user-defined threshold, taken as ε = 0.002.
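The coefficient-selection formula is an image in the original; a plausible reading, with the stated ε = 0.002 used as a tie threshold between the two firing-amplitude sums, can be sketched as:

```python
def choose_coeff(ca, cb, m_a, m_b, eps=0.002):
    """Pick the sub-band coefficient whose accumulated PCNN firing
    amplitude is larger; average the two when the sums differ by less
    than eps (the tie rule is an assumption)."""
    if abs(m_a - m_b) < eps:
        return 0.5 * (ca + cb)
    return ca if m_a > m_b else cb
```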
Because the output of the traditional PCNN uses a hard-limiting function, it cannot reflect the amplitude differences of neuron firing. The invention uses a Sigmoid function as the PCNN output, which better captures the differences in firing amplitude when synchronous pulses are excited. The PCNN output is defined as follows:
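The contrast between the two output styles can be sketched as follows; the exact argument of the Sigmoid is an assumption, since the patent's formula is an image:

```python
import math

def hard_limit(u, theta):
    """Traditional PCNN output: binary firing decision, so two neurons
    that both fire look identical regardless of internal activity u."""
    return 1.0 if u > theta else 0.0

def sigmoid_output(u, theta):
    """Sigmoid output preserves the amplitude difference between
    neurons whose internal activity exceeds the threshold by
    different margins."""
    return 1.0 / (1.0 + math.exp(-(u - theta)))
```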
To better represent the edge information of the image, the sum-modified Laplacian (SML) and the local spatial frequency are chosen as the external input and the linking coefficient of the PCNN, respectively. The SML is defined as follows:
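The SML formula is an image in the original; the standard sum-modified Laplacian, summed over a small window clipped to keep the Laplacian inside the image, is:

```python
def sml(img, x, y, step=1):
    """Sum-modified Laplacian at (x, y): |2f - f(left) - f(right)| plus
    |2f - f(up) - f(down)|, accumulated over the neighbourhood of (x, y)
    (window clipped so the differences stay inside the image)."""
    h, w = len(img), len(img[0])
    total = 0.0
    for i in range(max(step, x - 1), min(h - step, x + 2)):
        for j in range(max(step, y - 1), min(w - step, y + 2)):
            ml = abs(2 * img[i][j] - img[i - step][j] - img[i + step][j]) \
               + abs(2 * img[i][j] - img[i][j - step] - img[i][j + step])
            total += ml
    return total
```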
The spatial frequency is:
where RF, CF, MDF and SDF denote the row frequency, column frequency, main-diagonal frequency and secondary-diagonal frequency respectively, with the following formulas:
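The RF/CF/MDF/SDF formulas are images in the original. The sketch below uses the standard definitions, with an assumed 1/√2 distance weight on the diagonal terms as in the commonly used modified spatial frequency:

```python
import math

def spatial_frequency(img):
    """SF = sqrt of the mean of RF^2 + CF^2 + MDF^2 + SDF^2, computed
    from first differences along rows, columns and both diagonals."""
    h, w = len(img), len(img[0])
    n = h * w
    rf = sum((img[i][j] - img[i][j - 1]) ** 2 for i in range(h) for j in range(1, w))
    cf = sum((img[i][j] - img[i - 1][j]) ** 2 for i in range(1, h) for j in range(w))
    # diagonal terms weighted by 1/sqrt(2) squared = 0.5 (assumption)
    mdf = 0.5 * sum((img[i][j] - img[i - 1][j - 1]) ** 2
                    for i in range(1, h) for j in range(1, w))
    sdf = 0.5 * sum((img[i][j - 1] - img[i - 1][j]) ** 2
                    for i in range(1, h) for j in range(1, w))
    return math.sqrt((rf + cf + mdf + sdf) / n)
```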
Step 5: Apply the inverse NSCT to the fused low-frequency and high-frequency sub-bands to obtain the final fused image.
The fused image obtained with the technical scheme of the invention has prominent edges, higher contrast and brightness, and salient targets; the algorithm's average gradient, spatial frequency, standard deviation and information entropy are all higher than those of prior-art methods. It both effectively retains the infrared targets and effectively acquires the spatial-domain information of the source images, yielding a good fusion result.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 is the source infrared image of embodiment one;
Fig. 3 is the source visible light image of embodiment one;
Fig. 4 is the fused image obtained by reference 6 for embodiment one;
Fig. 5 is the fused image obtained by reference 8 for embodiment one;
Fig. 6 is the fused image obtained by reference 12 for embodiment one;
Fig. 7 is the fused image obtained by the algorithm of the invention for embodiment one;
Fig. 8 is the source infrared image of embodiment two;
Fig. 9 is the source visible light image of embodiment two;
Fig. 10 is the fused image obtained by reference 6 for embodiment two;
Fig. 11 is the fused image obtained by reference 8 for embodiment two;
Fig. 12 is the fused image obtained by reference 12 for embodiment two;
Fig. 13 is the fused image obtained by the algorithm of the invention for embodiment two.
Detailed description of the embodiments
The embodiments of the invention are further described below with reference to the accompanying drawings and examples, which do not limit the invention.
Fig. 1 shows the flow of the invention: an infrared and visible light image fusion algorithm based on mixed multi-scale analysis, comprising the following steps:
Steps 1 to 5 are carried out as described above.
An infrared image records the infrared radiation information of target objects and offers strong recognition of targets under low illumination or camouflage, but it is not sufficiently sensitive to brightness changes. Visible light images are strongly affected by illumination but can provide detailed information about the target scene. Fusing infrared and visible light images therefore combines their respective advantages, yielding a complementary image with a clear background and salient targets and allowing an observer to describe the scene more accurately and comprehensively.
The experimental data of the two embodiments are as follows:
Fig. 2 is the source infrared image of the first embodiment and Fig. 3 its source visible light image; the data of Fig. 4, Fig. 5 and Fig. 6 correspond to references 6, 8 and 12 of Table 1, and the data of Fig. 7 to the algorithm of the invention in Table 1.
Fig. 8 is the source infrared image of the second embodiment and Fig. 9 its source visible light image; the data of Fig. 10, Fig. 11 and Fig. 12 correspond to references 6, 8 and 12 of Table 2, and the data of Fig. 13 to the algorithm of the invention in Table 2.
From the objective evaluation in Tables 1 and 2 it can be seen that every evaluation index of the proposed method is better than those of the other methods, showing that the fusion result of this embodiment better matches human visual perception.
Table 1: fusion evaluation results for the first group of images.
Table 2: fusion evaluation results for the second group of images.
The embodiments of the invention have been described in detail above with reference to the accompanying drawings, but the invention is not limited to the described embodiments. For those skilled in the art, various changes, modifications, substitutions and variations made to these embodiments without departing from the principle and spirit of the invention still fall within the protection scope of the invention.
Claims (5)
- 1. An infrared and visible light image fusion algorithm based on mixed multi-scale analysis, characterised by comprising the following steps: Step 1: perform NSCT decomposition on the infrared and visible light images respectively to obtain the low-frequency sub-band L_J(x, y) and the high-frequency sub-bands H_{j,r}(x, y), where J is the number of decomposition levels and j, r denote the decomposition scale and direction number; Step 2: apply the stationary wavelet transform to the low-frequency sub-band to obtain one low-frequency sub-band and three high-frequency sub-bands, fuse the low- and high-frequency sub-bands using a combination of the local-energy and absolute-value-maximum rules together with compressed-sensing theory, then apply the inverse wavelet transform to obtain the low-frequency sub-band for NSCT reconstruction; Step 3: evaluate the definition of the images to be fused and choose the number of LSCN enhancement layers according to a decision rule; Step 4: fuse the top-level high-frequency sub-band with the absolute-value-maximum fusion rule and the remaining sub-bands with an improved PCNN model; Step 5: apply the inverse NSCT to the fused low-frequency and high-frequency sub-bands to obtain the final fused image.
- 2. The infrared and visible light image fusion algorithm based on mixed multi-scale analysis according to claim 1, characterised in that: in step 2, the low-frequency sub-band is fused by combining the local-energy and absolute-value-maximum rules with compressed-sensing theory, the specific method being as follows: where EN in the formula is the local-area energy, defined as:
- 3. The infrared and visible light image fusion algorithm based on mixed multi-scale analysis according to claim 1, characterised in that: in step 2, the high-frequency sub-bands are fused by combining the local-energy and absolute-value-maximum rules with compressed-sensing theory, the specific steps being: 1) divide the m × n high-frequency sub-band images into non-overlapping sub-blocks of equal size, where j = 1, 2, 3, and sparsify each sub-block using the sym8 wavelet basis; 2) design a measurement matrix Φ and sample the input high-frequency sub-band coefficients with it to obtain the measurement vectors, where k = 1, 2, …, m × n; 3) compute the standard deviation SD_k and the definition EAV_k of the measurement vectors, and obtain the fused measurement vector using a fusion rule combining the regional standard deviation, the regional definition and an S-function, i.e.: the image standard deviation formula is: where the image definition formula is: the weight coefficient ω is obtained from an S-function: where f is the contraction factor of the S-function, f ≥ 1, and f = 5 is taken; 4) perform sparse reconstruction on the fused measurement vector using the OMP reconstruction algorithm to obtain the high-frequency sub-band of the fused image; the obtained SW_F and the fused high-frequency sub-bands are reconstructed by the inverse stationary wavelet transform to obtain the low-frequency sub-band finally used for NSCT reconstruction.
- 4. The infrared and visible light image fusion algorithm based on mixed multi-scale analysis according to claim 1, characterised in that: in step 3, the specific method is as follows: the image definition formula is: the image definition is computed according to formula (8) and compared with the threshold λ, and the number of high-frequency coefficient enhancement layers is determined from the comparison, i.e.: where J is the number of decomposition levels and S is the composite definition of the source images; α1 = α2 = 0.5 and λ = 27 are taken.
- 5. The infrared and visible light image fusion algorithm based on mixed multi-scale analysis according to claim 1, characterised in that: in step 4, the specific fusion rule is as follows: to improve the visual perception of the image, the remaining sub-bands other than the top-level high-frequency sub-band n are fused with the improved PCNN model, and the fusion coefficients are determined by comparing the sums of the firing amplitudes of the PCNN neurons, i.e.: where M_ij(n) is the summed firing amplitude of the PCNN output pulses, j = 1, 2, …, n−1, and ε is a user-defined threshold, taken as ε = 0.002; because the output of the traditional PCNN uses a hard-limiting function and cannot reflect the amplitude differences of neuron firing, a Sigmoid function is used as the PCNN output, which better captures the differences in firing amplitude when synchronous pulses are excited; the PCNN output is defined as follows: to better represent the edge information of the image, the sum-modified Laplacian SML and the local spatial frequency are chosen as the external input and the linking coefficient of the PCNN respectively; the SML is defined as follows: the spatial frequency is: where RF, CF, MDF and SDF denote the row frequency, column frequency, main-diagonal frequency and secondary-diagonal frequency respectively, with formulas as follows:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710621620.0A CN107451984B (en) | 2017-07-27 | 2017-07-27 | Infrared and visible light image fusion algorithm based on mixed multi-scale analysis |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710621620.0A CN107451984B (en) | 2017-07-27 | 2017-07-27 | Infrared and visible light image fusion algorithm based on mixed multi-scale analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107451984A true CN107451984A (en) | 2017-12-08 |
CN107451984B CN107451984B (en) | 2021-06-22 |
Family
ID=60489702
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710621620.0A Active CN107451984B (en) | 2017-07-27 | 2017-07-27 | Infrared and visible light image fusion algorithm based on mixed multi-scale analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107451984B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108389158A (en) * | 2018-02-12 | 2018-08-10 | 河北大学 | Infrared and visible light image fusion method |
CN108648174A (en) * | 2018-04-04 | 2018-10-12 | 上海交通大学 | Multilayer image fusion method and system based on autofocus technology |
CN109035189A (en) * | 2018-07-17 | 2018-12-18 | 桂林电子科技大学 | Infrared and weak visible light image fusion method based on the Cauchy fuzzy function |
CN109118460A (en) * | 2018-06-27 | 2019-01-01 | 河海大学 | Spectral polarization information synchronous processing method and system |
CN109166088A (en) * | 2018-07-10 | 2019-01-08 | 南京理工大学 | Dual-band grayscale crater image fusion method based on the non-downsampled wavelet transform |
CN109191417A (en) * | 2018-09-11 | 2019-01-11 | 中国科学院长春光学精密机械与物理研究所 | Adaptive fusion method and device based on saliency detection and improved dual channels |
CN109242815A (en) * | 2018-09-28 | 2019-01-18 | 合肥英睿系统技术有限公司 | Infrared image and visible light image fusion method and system |
CN109345788A (en) * | 2018-09-26 | 2019-02-15 | 国网安徽省电力有限公司铜陵市义安区供电公司 | Monitoring and early-warning system based on visual features |
CN109360182A (en) * | 2018-10-31 | 2019-02-19 | 广州供电局有限公司 | Image fusion method, device, computer equipment and storage medium |
CN109410157A (en) * | 2018-06-19 | 2019-03-01 | 昆明理工大学 | Image fusion method based on low-rank sparse decomposition and PCNN |
CN109978802A (en) * | 2019-02-13 | 2019-07-05 | 中山大学 | High dynamic range image fusion method in the compressed-sensing domain based on NSCT and PCNN |
CN110110786A (en) * | 2019-05-06 | 2019-08-09 | 电子科技大学 | Infrared and visible light image fusion method based on NSCT and DWT |
CN111861957A (en) * | 2020-07-02 | 2020-10-30 | Tcl华星光电技术有限公司 | Image fusion method and device |
CN112734683A (en) * | 2021-01-07 | 2021-04-30 | 西安电子科技大学 | Multi-scale SAR and infrared image fusion method based on target enhancement |
CN114359687A (en) * | 2021-12-07 | 2022-04-15 | 华南理工大学 | Target detection method, device, equipment and medium based on dual fusion of multi-modal data |
CN114757895A (en) * | 2022-03-25 | 2022-07-15 | 国网浙江省电力有限公司电力科学研究院 | Method and system for judging direct-sunlight interference in infrared images of composite insulators |
CN116403057A (en) * | 2023-06-09 | 2023-07-07 | 山东瑞盈智能科技有限公司 | Power transmission line inspection method and system based on multi-source image fusion |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1873693A (en) * | 2006-06-27 | 2006-12-06 | 上海大学 | Method based on the Contourlet transform, a modified pulse-coupled neural network, and image fusion |
CN102254314A (en) * | 2011-07-17 | 2011-11-23 | 西安电子科技大学 | Visible-light/infrared image fusion method based on compressed sensing |
US20140341481A1 (en) * | 2013-03-15 | 2014-11-20 | Karen A. Panetta | Methods and Apparatus for Image Processing and Analysis |
WO2016050290A1 (en) * | 2014-10-01 | 2016-04-07 | Metaio Gmbh | Method and system for determining at least one property related to at least part of a real environment |
CN105719263A (en) * | 2016-01-22 | 2016-06-29 | 昆明理工大学 | Visible light and infrared image fusion algorithm based on NSCT domain bottom layer visual features |
CN106327459A (en) * | 2016-09-06 | 2017-01-11 | 四川大学 | Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network) |
CN106600572A (en) * | 2016-12-12 | 2017-04-26 | 长春理工大学 | Adaptive low-illumination visible image and infrared image fusion method |
CN106981057A (en) * | 2017-03-24 | 2017-07-25 | 中国人民解放军国防科学技术大学 | A kind of NSST image interfusion methods based on RPCA |
- 2017
- 2017-07-27: Application CN201710621620.0A filed in China; granted as CN107451984B (legal status: Active)
Non-Patent Citations (8)
Title |
---|
ZHANWEN LIU et al.: "A fusion algorithm for infrared and visible images based on RDU-PCNN and ICA-bases in NSST domain", Infrared Physics & Technology *
LIU Zhanwen et al.: "An infrared and visible light image fusion algorithm based on NSST and dictionary learning", Journal of Northwestern Polytechnical University *
LI Zuolin et al.: "Research on sharpness evaluation methods for no-reference images", Remote Sensing Technology and Application *
YIN Ming et al.: "An image fusion algorithm combining NSDTCT and compressed-sensing PCNN", Journal of Computer-Aided Design & Computer Graphics *
XING Xiaoxue: "Research on image fusion algorithms based on NSST", China Doctoral Dissertations Full-text Database, Information Science and Technology *
YAN Li et al.: "Infrared and visible light image fusion combining edge features and adaptive PCNN in the NSCT domain", Acta Electronica Sinica *
CHEN Zhen et al.: "Infrared and visible light image fusion in the NSCT domain based on a compensation mechanism", Chinese Journal of Scientific Instrument *
GONG Changlai: "A wavelet image fusion method combining multiple activity measures", Opto-Electronic Engineering *
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108389158A (en) * | 2018-02-12 | 2018-08-10 | 河北大学 | Infrared and visible light image fusion method |
CN108648174A (en) * | 2018-04-04 | 2018-10-12 | 上海交通大学 | Multilayer image fusion method and system based on autofocus technology |
CN109410157B (en) * | 2018-06-19 | 2022-02-08 | 昆明理工大学 | Image fusion method based on low-rank sparse decomposition and PCNN |
CN109410157A (en) * | 2018-06-19 | 2019-03-01 | 昆明理工大学 | Image fusion method based on low-rank sparse decomposition and PCNN |
CN109118460B (en) * | 2018-06-27 | 2020-08-11 | 河海大学 | Method and system for synchronously processing light-splitting polarization spectrum information |
CN109118460A (en) * | 2018-06-27 | 2019-01-01 | 河海大学 | Method and system for synchronously processing light-splitting polarization spectrum information |
CN109166088A (en) * | 2018-07-10 | 2019-01-08 | 南京理工大学 | Dual-band grayscale crater image fusion method based on non-downsampled wavelet transform |
CN109035189A (en) * | 2018-07-17 | 2018-12-18 | 桂林电子科技大学 | Infrared and weak visible light image fusion method based on Cauchy fuzzy function |
CN109035189B (en) * | 2018-07-17 | 2021-07-23 | 桂林电子科技大学 | Infrared and weak visible light image fusion method based on Cauchy fuzzy function |
CN109191417A (en) * | 2018-09-11 | 2019-01-11 | 中国科学院长春光学精密机械与物理研究所 | Adaptive dual-channel fusion method and device based on saliency detection |
CN109345788A (en) * | 2018-09-26 | 2019-02-15 | 国网安徽省电力有限公司铜陵市义安区供电公司 | A monitoring and early-warning system based on visual features |
CN109242815A (en) * | 2018-09-28 | 2019-01-18 | 合肥英睿系统技术有限公司 | Infrared light image and visible light image fusion method and system |
CN109242815B (en) * | 2018-09-28 | 2022-03-18 | 合肥英睿系统技术有限公司 | Infrared light image and visible light image fusion method and system |
CN109360182A (en) * | 2018-10-31 | 2019-02-19 | 广州供电局有限公司 | Image fusion method, device, computer equipment and storage medium |
CN109978802A (en) * | 2019-02-13 | 2019-07-05 | 中山大学 | High dynamic range image fusion method in the compressed sensing domain based on NSCT and PCNN |
CN110110786A (en) * | 2019-05-06 | 2019-08-09 | 电子科技大学 | Infrared and visible light image fusion method based on NSCT and DWT |
CN110110786B (en) * | 2019-05-06 | 2023-04-14 | 电子科技大学 | Infrared and visible light image fusion method based on NSCT and DWT |
CN111861957A (en) * | 2020-07-02 | 2020-10-30 | Tcl华星光电技术有限公司 | Image fusion method and device |
CN111861957B (en) * | 2020-07-02 | 2024-03-08 | Tcl华星光电技术有限公司 | Image fusion method and device |
CN112734683B (en) * | 2021-01-07 | 2024-02-20 | 西安电子科技大学 | Multi-scale SAR and infrared image fusion method based on target enhancement |
CN112734683A (en) * | 2021-01-07 | 2021-04-30 | 西安电子科技大学 | Multi-scale SAR and infrared image fusion method based on target enhancement |
CN114359687A (en) * | 2021-12-07 | 2022-04-15 | 华南理工大学 | Target detection method, device, equipment and medium based on multi-mode data dual fusion |
CN114359687B (en) * | 2021-12-07 | 2024-04-09 | 华南理工大学 | Target detection method, device, equipment and medium based on multi-mode data dual fusion |
CN114757895A (en) * | 2022-03-25 | 2022-07-15 | 国网浙江省电力有限公司电力科学研究院 | Method and system for judging direct sunlight interference in composite insulator infrared images |
CN116403057B (en) * | 2023-06-09 | 2023-08-18 | 山东瑞盈智能科技有限公司 | Power transmission line inspection method and system based on multi-source image fusion |
CN116403057A (en) * | 2023-06-09 | 2023-07-07 | 山东瑞盈智能科技有限公司 | Power transmission line inspection method and system based on multi-source image fusion |
Also Published As
Publication number | Publication date |
---|---|
CN107451984B (en) | 2021-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107451984A (en) | An infrared and visible image fusion algorithm based on hybrid multiscale analysis | |
CN106846289B (en) | An infrared intensity and polarization image fusion method | |
CN106327459A (en) | Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network) | |
CN104408700A (en) | Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images | |
CN108921809B (en) | Multispectral and panchromatic image fusion method based on spatial frequency under integral principle | |
CN106960428A (en) | Visible and infrared dual-band image fusion enhancement method | |
CN104268833B (en) | Image fusion method based on translation-invariant shearlet transform | |
CN109102485A (en) | Image fusion method and device based on NSST and adaptive dual-channel PCNN | |
CN109801250A (en) | Infrared and visible light image fusion method based on ADC-SCM and low-rank matrix representation | |
CN104123705B (en) | A Contourlet-domain quality evaluation method for super-resolution reconstructed images | |
CN103020933B (en) | A multi-source image fusion method based on a bionic visual mechanism | |
CN108765280A (en) | A hyperspectral image spatial resolution enhancement method | |
CN112950518B (en) | Image fusion method based on latent low-rank representation and nested rolling guidance image filtering | |
CN109961408B (en) | Photon counting image denoising method based on NSCT and block matching filtering | |
Zhang et al. | A multi-modal image fusion framework based on guided filter and sparse representation | |
Yadav et al. | A review on image fusion methodologies and applications | |
CN105825491A (en) | Image fusion method based on hybrid model | |
CN105809650A (en) | Image fusion method based on bidirectional iterative optimization | |
CN104156930B (en) | Image fusion method and apparatus based on dual-scale space | |
Luo et al. | Multi-modal image fusion via deep Laplacian pyramid hybrid network |
Ye et al. | An unsupervised SAR and optical image fusion network based on structure-texture decomposition | |
CN103198456A (en) | Remote sensing image fusion method based on directionlet domain hidden Markov tree (HMT) model | |
Gao et al. | Infrared and visible image fusion using dual-tree complex wavelet transform and convolutional sparse representation | |
Jian et al. | Multi-source image fusion algorithm based on fast weighted guided filter | |
Yu et al. | A multi-band image synchronous fusion method based on saliency |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 20171208 Assignee: Guangxi Yanze Information Technology Co., Ltd. Assignor: Guilin University of Electronic Technology Contract record no.: X2023980046249 Denomination of invention: A fusion algorithm for infrared and visible light images based on hybrid multiscale analysis Granted publication date: 20210622 License type: Common License Record date: 20231108 |
|