CN111080568B - Near infrared and color visible light image fusion algorithm based on Tetrolet transformation - Google Patents
- Publication number: CN111080568B
- Application number: CN201911280623A
- Authority
- CN
- China
- Prior art keywords: image, frequency, fusion, low, coefficient
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention provides a near infrared and color visible light image fusion algorithm based on the Tetrolet transform. It belongs to the technical field of image processing and addresses the low contrast and unclear detail that result when near infrared and color visible light images are fused. First, the color visible light image is converted into HSI space, and the Tetrolet transform is applied to its luminance component and to the infrared image to obtain low-frequency and high-frequency subband coefficients. Second, an expectation-maximization fusion rule is proposed for the low-frequency subband coefficients, and an adaptive PCNN model is proposed as the fusion rule for the high-frequency subband coefficients; the fused luminance image is obtained by the inverse Tetrolet transform. A saturation component stretching method is then proposed, and finally the processed components are mapped back to RGB space to complete the fusion. The fused image obtained by the method has clear detail and full color, and its color contrast is markedly improved.
Description
Technical Field
The invention belongs to the technical field of image processing and relates to a near infrared and color visible light image fusion algorithm, in particular to a near infrared and color visible light image fusion algorithm based on the Tetrolet transform.
Background
Image fusion merges multiple source images acquired by several sensors into a single image that contains all the important characteristics of the sources. It effectively reduces the uncertainty of the image information, enhances it, and expands its content. Because the fused image carries all the characteristic information of the source images, it is better suited to subsequent recognition processing and research. Infrared and visible light image fusion can combine the thermal radiation targets of the infrared image with the scene information of the visible light image, so this research is of great significance in both military and civil fields. In the fusion process, however, fused near infrared and color visible light images suffer from low contrast and unclear detail.
Many scholars at home and abroad have studied image fusion algorithms. In 2010, Jens Krommweh proposed the Tetrolet transform, a sparse image representation method developed from the adaptive Haar wavelet transform; it has a good directional structure, can express high-dimensional texture characteristics of images, offers high sparsity, and is therefore well suited as a fusion framework. Nemalidined proposed a PCNN-based infrared and visible light image fusion method in which the low-frequency components are fused by a pulse coupled neural network (PCNN) excited by the modified Laplacian so as to retain the maximum available information of the two source images, while the high-frequency components use an energy-based local log-Gabor fusion rule, achieving a good fusion effect. Cheng proposed a new infrared and visible light image fusion framework based on an adaptive dual-channel unit, applying a pulse coupled neural network with singular value decomposition (ADS-PCNN) to image fusion; the image average gradient (IAVG) of the high-frequency and low-frequency components is used to stimulate the ADS-PCNN, alleviating the large spectral difference between infrared and visible light images and the black artifacts that easily appear in fused images. A local structure information operator (LSI) serves as the adaptive link strength to enhance fusion precision, local singular value decomposition is performed on each source image, and the number of iterations is determined adaptively.
Disclosure of Invention
The invention aims to solve the problems of the prior art by providing a near infrared and color visible light image fusion algorithm based on the Tetrolet transform, addressing the low contrast and unclear detail that follow the fusion of near infrared and color visible light images.
The aim of the invention can be achieved by the following technical scheme:
A near infrared and color visible light image fusion algorithm based on the Tetrolet transform comprises the following steps:
Step one: convert the visible light image from RGB space to HSI space to obtain the chromaticity component I_H, the saturation component I_S, and the luminance component I_b.
Step two: perform the Tetrolet transform on the luminance component I_b and the infrared image I_i respectively to obtain the corresponding low-frequency coefficients L_b and L_i and high-frequency coefficients H_b and H_i; fuse the low-frequency coefficients L_b and L_i with the expectation-maximization algorithm to obtain the fused low-frequency coefficient L_f; fuse the high-frequency coefficients H_b and H_i with the improved adaptive PCNN to obtain the fused high-frequency coefficient H_f; apply the inverse Tetrolet transform to L_f and H_f to obtain the fused luminance image I_f.
Step three: stretch the saturation component I_S nonlinearly to obtain the stretched saturation component I'_S.
Step four: replace the original chromaticity, saturation and luminance components with I_H, I'_S and I_f, then map back to RGB space to obtain the final fused image.
The working principle of the invention is as follows: first, the color visible light image is converted into HSI space, and the Tetrolet transform is applied to its luminance component and to the infrared image to obtain the low-frequency and high-frequency subband coefficients. Second, an expectation-maximization fusion rule is proposed for the low-frequency subband coefficients; for the high-frequency subband coefficients, the Sobel operator is used to adjust the threshold of the PCNN model, and this adaptive PCNN model is proposed as the fusion rule. The fused luminance image is obtained by the inverse Tetrolet transform. Then, to counter the drop in saturation of the fused image, a saturation component stretching method is proposed. Finally, the processed components are mapped back to RGB space to complete the fusion; the fused image obtained by the method has clear detail, full color and markedly improved color contrast.
In step one, the visible light image is converted from RGB space to HSI space using the standard model method; the specific formula is as follows:
S=Max-Min
where the R, G, B components are normalized data, Max denotes the maximum and Min the minimum of (R, G, B), and H, S, I denote the converted chromaticity, saturation and brightness respectively; if H < 0, 2π is added to it.
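Only S = Max − Min and the "add 2π when H < 0" correction survive in the translated text, so the sketch below fills in one common geometric HSI model; the intensity definition I = (Max + Min)/2 and the arctangent hue are assumptions, not the patent's exact formulas.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one normalized RGB triple to (H, S, I).
    S = Max - Min as in the text; H and I follow a common geometric
    model (assumed, since the patent's formula images are lost)."""
    mx, mn = max(r, g, b), min(r, g, b)
    s = mx - mn
    i = (mx + mn) / 2.0                       # assumed intensity definition
    h = math.atan2(math.sqrt(3.0) * (g - b), 2.0 * r - g - b)
    if h < 0:                                 # the correction stated in the text
        h += 2.0 * math.pi
    return h, s, i
```

Pure red maps to hue 0 and pure green to 2π/3, as expected from the arctangent model.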
In step two, the Tetrolet transform selects the tiling by the maximum of the first-order norm instead of the minimum used in the original transform; the selection formula is as follows:
where G_{d,(c),z} denotes the high-frequency coefficients, S the low-frequency coefficients, and c the corresponding Tetrolet partition block.
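The improved selection rule can be sketched as follows: among the candidate tetromino tilings of a 4×4 block, pick the one whose high-frequency coefficients have the largest l1 norm (the original transform picks the smallest). The coefficient arrays are assumed to come from an existing Tetrolet decomposition.

```python
import numpy as np

def select_tiling(high_coeff_sets):
    """high_coeff_sets: one high-frequency coefficient array per
    candidate tiling of the block. Returns the index of the tiling
    with the LARGEST first-order (l1) norm, per the improved rule."""
    norms = [float(np.abs(h).sum()) for h in high_coeff_sets]
    return int(np.argmax(norms))
```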
In step two the low-frequency subband coefficients are fused with the expectation-maximization (EM) algorithm, which finds the maximum likelihood estimate of the underlying distribution from a given incomplete data set; here it is applied to the fusion of the low-frequency coefficient images. Assume the K low-frequency images I_k, k ∈ {1, 2, …, K}, to be fused come from an unknown image F, so that the data set is incomplete; I_k is:
I_k(i,j) = α_k(i,j)F(l) + ε_k(i,j)
where α_k(i,j) ∈ {−1, 0, 1} is the sensor selectivity factor and ε_k(i,j) is random noise at position (i,j); when the images do not have the same form, the sensor selectivity factor α_k is used:
In the expectation-maximization algorithm, the local noise ε_k(i,j) is modeled by a mixture of M Gaussian probability density functions, with the following formula:
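The zero-mean Gaussian mixture used for the local noise can be written directly; the weights and standard deviations here are free parameters that the EM iteration estimates.

```python
import math

def gmm_pdf(x, weights, sigmas):
    """Density of a mixture of M zero-mean Gaussians at x:
    sum_m w_m * N(x; 0, sigma_m^2)."""
    return sum(w * math.exp(-x * x / (2.0 * s * s)) / math.sqrt(2.0 * math.pi * s * s)
               for w, s in zip(weights, sigmas))
```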
The low-frequency coefficients in step two are fused as follows:
S1. Standardize and normalize the image data:
I'_k(i,j) = (I_k(i,j) − μ)H
where I'_k and I_k are the standardized image and the original image respectively, μ is the mean of the whole image, and H is the number of grey levels of the image;
S2. Set the initial values of all parameters; using the method of averaging the imaging sensor images, assume the fused image is F,
where w_k is the weight coefficient of the image to be fused;
the overall variance of the pixel neighborhood window L = p × q is:
and the initialization variance of the Gaussian mixture model is:
S3. Compute the conditional probability density of the m-th term of the Gaussian mixture given the current parameters:
S4. Update the parameter α_k; its value in {−1, 0, 1} is chosen to maximize the following expression:
S5. Recompute the conditional probability density g_{m,k,l} and update the real scene F(l):
S6. Update the noise model parameters:
S7. Repeat steps S3 to S6 with the new parameters; the fused image is then determined as:
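Steps S1–S7 can be condensed into a toy EM loop. This sketch assumes α_k = 1 for every sensor and a single Gaussian noise term instead of the full mixture, so the M-step collapses to an inverse-variance weighted average; it shows the shape of the iteration, not the patent's full rule.

```python
import numpy as np

def em_fuse(images, iters=20):
    """Greatly simplified EM-style low-frequency fusion sketch.
    Assumes alpha_k = 1 and one zero-mean Gaussian noise term per
    sensor; the patent additionally estimates alpha_k in {-1, 0, 1}
    and a full Gaussian mixture for the noise."""
    images = np.asarray(images, dtype=float)       # shape (K, H, W)
    F = images.mean(axis=0)                        # S2: initialize by averaging
    sigma2 = np.array([((im - F) ** 2).mean() + 1e-12 for im in images])
    for _ in range(iters):                         # S3-S6 collapsed
        w = 1.0 / sigma2                           # inverse-variance weights
        F = np.tensordot(w, images, axes=1) / w.sum()
        sigma2 = np.array([((im - F) ** 2).mean() + 1e-12 for im in images])
    return F                                       # S7: fused low-frequency image
```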
In step two, the high-frequency subband coefficients are fused with the improved adaptive PCNN, whose threshold is controlled adaptively by the Sobel operator, specifically as follows:
where H(i,j) is a high-frequency subband coefficient.
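A sketch of the Sobel-driven threshold: the gradient magnitude of the high-frequency subband is used as a per-pixel threshold map for the PCNN. The exact gradient-to-threshold mapping is an assumption here, since the formula image is lost.

```python
import numpy as np

def sobel_threshold(H):
    """Per-pixel Sobel gradient magnitude of the high-frequency
    subband H, used as an adaptive PCNN threshold map (sketch)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(H, 1, mode='edge')
    gx = np.zeros_like(H, dtype=float)
    gy = np.zeros_like(H, dtype=float)
    for i in range(H.shape[0]):
        for j in range(H.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.sqrt(gx ** 2 + gy ** 2)
```

A flat subband yields a zero threshold everywhere; a ramp yields a positive one, so detailed regions demand stronger stimulation before a neuron fires.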
In step two the high-frequency coefficients are fused by taking the Tetrolet coefficient corresponding to the larger firing count; the iteration stops when n = N, and from the given initial values the fused high-frequency subband coefficient y_F is obtained as:
where the firing counts of the high-frequency coefficients are compared, and y_F(i,j), y_I(i,j), y_V(i,j) denote the fused, infrared and visible-light coefficients at position (i,j) respectively.
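The firing-count comparison amounts to a per-position selection; a minimal sketch, with the firing-count maps assumed to come from running the adaptive PCNN for N iterations on each subband:

```python
import numpy as np

def select_by_firing(y_I, y_V, fires_I, fires_V):
    """Per position, keep the coefficient whose PCNN neuron fired at
    least as often; ties go to the infrared coefficient here (an
    arbitrary choice, the patent does not specify tie-breaking)."""
    return np.where(fires_I >= fires_V, y_I, y_V)
```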
In step three, the saturation channel image is adaptively stretched as follows:
where I'_S is the saturation component after stretching, Max is the maximum and Min the minimum pixel value of the saturation component.
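Since only the variables (the channel's own Max and Min) survive in the translated text, the curve below is an assumed stand-in: min-max normalization of the saturation channel followed by a power-law lift, one common way to stretch saturation nonlinearly.

```python
import numpy as np

def stretch_saturation(S, gamma=0.8):
    """Adaptive nonlinear stretch of the saturation channel (sketch;
    the patent's exact curve is not recoverable from the text).
    Normalizes with the channel's own Min/Max, applies a power-law
    lift with exponent gamma < 1, then maps back to the original range."""
    mn, mx = S.min(), S.max()
    if mx == mn:
        return S.copy()                   # flat channel: nothing to stretch
    return ((S - mn) / (mx - mn)) ** gamma * (mx - mn) + mn
```

The endpoints are preserved while mid-range saturation is lifted, which is what raises the perceived color contrast.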
In the fourth step, the fusion rule compares the difference of the local-area standard deviations of the infrared and visible light images with a threshold: when the difference is large, the coefficient of the image with the larger deviation is taken; otherwise the mean of the two image coefficients is taken. The choice of the threshold th is therefore important; it is mainly made by experience, with th usually between 0.1 and 0.3. Specifically:
where F_{L,F} denotes the fused low-frequency component, the other two terms denote the processed visible-light luminance low-frequency component and the processed near infrared low-frequency component, and σ_Vi − σ_In denotes the difference of the mean square deviations of the visible-light luminance low-frequency component and the near infrared low-frequency component.
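The rule reads directly as code: compare local standard deviations against th, keep the stronger coefficient where they differ clearly, average otherwise. The 3×3 window size is an assumption; the patent only fixes th in the range 0.1-0.3.

```python
import numpy as np

def local_std(img, win=3):
    """Per-pixel standard deviation over a win x win neighborhood."""
    r = win // 2
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + win, j:j + win].std()
    return out

def low_freq_rule(Lv, Ln, th=0.2):
    """th in [0.1, 0.3] per the text. Where the local deviations differ
    by more than th, keep the coefficient of the image with the larger
    deviation; otherwise average the two coefficients."""
    sv, sn = local_std(Lv), local_std(Ln)
    pick = np.where(sv >= sn, Lv, Ln)
    return np.where(np.abs(sv - sn) > th, pick, (Lv + Ln) / 2.0)
```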
Compared with the prior art, the invention has the following advantages:
1. The invention provides a near infrared and color visible light image fusion algorithm based on the Tetrolet transform. On the basis of the HSI color space, it decomposes the near infrared and color visible light images with the Tetrolet transform and an adaptive pulse coupled neural network, processes the high- and low-frequency components separately, and then fuses them and stretches the saturation, yielding an image with clear detail and full color that is suited to direct observation by the human eye. The color contrast of the fused image is markedly improved, and the method holds clear advantages in objective evaluation indices such as image saturation, color recovery performance, structural similarity and contrast.
2. The invention transfers the RGB image into HSI space; processing the three channels H, S, I independently, with the luminance, chromaticity and saturation components handled separately, ensures that the color information is not distorted.
3. The invention improves the decomposition framework of the Tetrolet transform, making the decomposed high- and low-frequency coefficients easier to process and greatly improving the quality of the fused image.
4. The invention applies an adaptive nonlinear stretch to the saturation channel image, so that the saturation under different conditions can be stretched adaptively to an optimal effect, improving the contrast.
Drawings
FIG. 1 is a schematic flow chart of an algorithm of the present invention;
FIG. 2 is the first comparison of the fusion effect of the algorithm of the invention with other algorithms;
FIG. 3 is the second comparison of the fusion effect of the algorithm of the invention with other algorithms;
In fig. 2: fig. a1 is the first set of visible light original images, fig. b1 the first set of near infrared original images, fig. c1 the image obtained by fusing figs. a1 and b1 with the DWT method, fig. d1 the image obtained with the NSCT-PCNN method, fig. e1 the image obtained with the Tetrolet-PCNN method, and fig. f1 the image obtained with the method of the present invention;
In fig. 3: fig. a2 is the second set of visible light original images, fig. b2 the second set of near infrared original images, fig. c2 the image obtained by fusing figs. a2 and b2 with the DWT method, fig. d2 the image obtained with the NSCT-PCNN method, fig. e2 the image obtained with the Tetrolet-PCNN method, and fig. f2 the image obtained with the method of the present invention.
Detailed Description
The technical scheme of the patent is further described in detail below with reference to the specific embodiments.
Embodiments of the present patent are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present patent and are not to be construed as limiting the present patent.
Referring to fig. 1, the present embodiment provides a near infrared and color visible light image fusion algorithm based on the Tetrolet transform, which includes the following steps:
Step one: convert the visible light image from RGB space to HSI space to obtain the chromaticity component I_H, the saturation component I_S, and the luminance component I_b.
Step two: perform the Tetrolet transform on the luminance component I_b and the infrared image I_i respectively to obtain the corresponding low-frequency and high-frequency coefficients; fuse the low-frequency coefficients with the expectation-maximization algorithm to obtain the fused low-frequency coefficient; fuse the high-frequency coefficients with the improved adaptive PCNN to obtain the fused high-frequency coefficient; apply the inverse Tetrolet transform to the fused coefficients to obtain the fused luminance image I_f.
Step three: stretch the saturation component I_S nonlinearly to obtain the stretched saturation component I'_S.
Step four: replace the original chromaticity, saturation and luminance components with I_H, I'_S and I_f, then map back to RGB space to obtain the final fused image.
In step one, the visible light image is converted from RGB space to HSI space using the standard model method; the specific formula is as follows:
S=Max-Min
where the R, G, B components are normalized data, Max denotes the maximum and Min the minimum of (R, G, B), and H, S, I denote the converted chromaticity, saturation and brightness respectively; if H < 0, 2π is added to it.
In the Tetrolet transform, the tiling is selected by the maximum of the first-order norm instead of the minimum used in the original transform; the selection formula is as follows:
where G_{d,(c),z} denotes the high-frequency coefficients, S the low-frequency coefficients, and c the corresponding Tetrolet partition block.
In step two the low-frequency subband coefficients are fused with the expectation-maximization (EM) algorithm, which finds the maximum likelihood estimate of the underlying distribution from a given incomplete data set; here it is applied to the fusion of the low-frequency coefficient images. Assume the K low-frequency images I_k, k ∈ {1, 2, …, K}, to be fused come from an unknown image F, so that the data set is incomplete; I_k is:
I_k(i,j) = α_k(i,j)F(l) + ε_k(i,j)
where α_k(i,j) ∈ {−1, 0, 1} is the sensor selectivity factor and ε_k(i,j) is random noise at position (i,j); when the images do not have the same form, the sensor selectivity factor α_k is used:
In the expectation-maximization algorithm, the local noise ε_k(i,j) is modeled by a mixture of M Gaussian probability density functions, with the following formula:
The low-frequency coefficients in step two are fused as follows:
S1. Standardize and normalize the image data:
I'_k(i,j) = (I_k(i,j) − μ)H
where I'_k and I_k are the standardized image and the original image respectively, μ is the mean of the whole image, and H is the number of grey levels of the image;
S2. Set the initial values of all parameters; using the method of averaging the imaging sensor images, assume the fused image is F,
where w_k is the weight coefficient of the image to be fused;
the overall variance of the pixel neighborhood window L = p × q is:
and the initialization variance of the Gaussian mixture model is:
S3. Compute the conditional probability density of the m-th term of the Gaussian mixture given the current parameters:
S4. Update the parameter α_k; its value in {−1, 0, 1} is chosen to maximize the following expression:
S5. Recompute the conditional probability density g_{m,k,l} and update the real scene F(l):
S6. Update the noise model parameters:
S7. Repeat steps S3 to S6 with the new parameters; the fused image is then determined as:
In step two, the high-frequency subband coefficients are fused with the improved adaptive PCNN, whose threshold is controlled adaptively by the Sobel operator, specifically as follows:
where H(i,j) is a high-frequency subband coefficient.
In the high-frequency fusion based on the improved adaptive PCNN, the firing count of a PCNN neuron reflects how strongly it is stimulated from outside, and hence how much detail the Tetrolet subband coefficient contains, so the Tetrolet coefficient corresponding to the larger firing count is taken. The iteration stops when n = N, and from the given initial values the fused high-frequency subband coefficient y_F is obtained as:
where y_F(i,j), y_I(i,j), y_V(i,j) denote the fused, infrared and visible-light coefficients at position (i,j) respectively.
If the fused luminance image were converted directly into RGB space, the image colors would be light, the contrast reduced, the colors not prominent, and distortion would result; therefore the saturation S is stretched nonlinearly to improve the contrast. To stretch the saturation under different conditions adaptively to an optimal effect, the adaptive stretching of the saturation channel image in step three is specifically as follows:
where I'_S is the saturation component after stretching, Max is the maximum and Min the minimum pixel value of the saturation component.
In the fourth step, the fusion rule compares the difference of the local-area standard deviations of the infrared and visible light images with a threshold: when the difference is large, the coefficient of the image with the larger deviation is taken; otherwise the mean of the two image coefficients is taken. The choice of the threshold th is therefore important; it is currently made mainly by experience, with th usually between 0.1 and 0.3. Specifically:
where F_{L,F} denotes the fused low-frequency component, the other two terms denote the processed visible-light luminance low-frequency component and the processed near infrared low-frequency component, and σ_Vi − σ_In denotes the difference of the mean square deviations of the visible-light luminance low-frequency component and the near infrared low-frequency component.
The invention provides a near infrared and color visible light image fusion algorithm based on the Tetrolet transform. To keep the color information undistorted, the color visible light image is converted into the HSI color space; since the three channels H, S, I are uncorrelated, the luminance, chromaticity and saturation components can be separated and processed independently, so the RGB image is first transferred into HSI space. At the same time, the decomposition framework of the Tetrolet transform is improved: the template is selected by the maximum of the first-order norm, which overcomes the narrowing of the value range of the high-frequency coefficients in the original Tetrolet transform, so the decomposed high-frequency components contain more contour information, the decomposed high- and low-frequency coefficients are easier to process, and the quality of the fused image is greatly improved.
Seeking the maximum likelihood estimate of the underlying distribution from a given incomplete data set, a low-frequency component fusion rule based on the expectation-maximization algorithm is proposed. To better preserve the detail information of the fused image, a new PCNN network model is adopted as the fusion rule for the high-frequency components: the coefficient corresponding to the neuron with the largest firing count is selected as the high-frequency component, and a difference-of-Gaussians operator adaptively controls the threshold of the PCNN. The fused image obtained by the inverse Tetrolet transform of the processed low- and high-frequency components serves as the new luminance component. To improve the contrast of the resulting image, the saturation component is stretched nonlinearly. Finally, the processed luminance component, saturation component and the original chromaticity component are mapped back to RGB space to complete the fusion. The fused image obtained by the method has clear detail, full color and markedly improved color contrast.
Effect verification: to verify the effect of the fusion algorithm, three common transform-domain fusion methods are compared with the method of the invention: a discrete wavelet transform method (DWT) in which the low-frequency components are fused by averaging and the high-frequency components by a maximum-region-energy rule; a non-subsampled decomposition method (NSCT-PCNN) in which the low-frequency components use the expectation maximum and the high-frequency components a fixed-threshold PCNN; and a Tetrolet decomposition method (Tetrolet-PCNN) in which the low-frequency components are averaged and the high-frequency components use a fixed-threshold PCNN. The number of DWT decomposition levels is 4; NSCT-PCNN uses 4 decomposition levels with 4, 8 and 16 directions; in the Tetrolet-PCNN method the number of decomposition levels is 4, the link strength 0.118, the input attenuation coefficient 0.145 and the link amplitude 135.5; in the method of the invention the number of Tetrolet decomposition levels is 4.
Two sets of images with a resolution of 1024 × 680 are selected for fusion comparison, and the experimental results are compared subjectively and objectively.
Subjective comparison is shown in fig. 2 and fig. 3, which give two sets of effect comparisons of image fusion with the algorithm of the invention and with the other algorithms.
In fig. 2, fig. a1 is a first set of visible light original images, fig. b1 is a first set of near infrared original images, fig. c1 is an image obtained by fusing fig. a1 and fig. b1 by using a DWT method, fig. d1 is an image obtained by fusing fig. a1 and fig. b1 by using a NSCT-PCNN method, fig. e1 is an image obtained by fusing fig. a1 and fig. b1 by using a Tetrolet-PCNN method, and f1 is an image obtained by fusing fig. a1 and fig. b1 by using the method of the present invention.
In fig. 3, fig. a2 is a second set of visible light original images, fig. b2 is a second set of near infrared original images, fig. c2 is an image obtained by fusing fig. a2 and fig. b2 by a DWT method, fig. d2 is an image obtained by fusing fig. a2 and fig. b2 by a NSCT-PCNN method, fig. e2 is an image obtained by fusing fig. a2 and fig. b2 by a Tetrolet-PCNN method, and f2 is an image obtained by fusing fig. a2 and fig. b2 by the method of the present invention.
As can be seen from the fusion results in fig. 2 and 3, the DWT method produces blurred edges and the worst fusion quality; the NSCT-PCNN method can extract spatial detail information from the source images, but the edges of the person region in the scene are blurred; the Tetrolet-PCNN method preserves contours and boundaries better, but its color contrast still falls short of the method of the present invention. By comparison, the method of the invention best retains spatial detail and target edge information: the edges and the house texture details are the clearest, the color contrast is better suited to human perception, and the overall effect is the best.
Objective comparison: four evaluation indices are selected to evaluate all fusion results objectively: the mutual-information index QMI, the blind evaluation index σ, the structural similarity index SSIM, and the image contrast gain Cg. QMI measures how much original information from the source images is retained in the final fused image; the larger its value, the more information is retained and the better the effect. The blind evaluation index σ evaluates the color-restoration performance of a fusion algorithm; the smaller σ is, the better the algorithm performs. The structural similarity SSIM ranges from 0 to 1, reaching 1 when the images are identical. The contrast gain Cg represents the average contrast difference between the fused image and the original image and reflects differences in image contrast more intuitively. Tables 1 and 2 list the objective evaluation data for the two groups of images fused with the method of the invention and the three methods described above.
TABLE 1 Objective evaluation indices for the first group of images

Method | QMI | σ | SSIM | Cg
---|---|---|---|---
DWT | 0.6255 | 0.0062 | 0.6447 | 0.6326
NSCT+PCNN | 0.5009 | 0.0481 | 0.6780 | 0.6238
Tetrolet+PCNN | 0.7018 | 0.0015 | 0.6092 | 0.7486
Method of the invention | 0.8681 | 0.0001 | 0.5223 | 0.8467
TABLE 2 Objective evaluation indices for the second group of images

Method | QMI | σ | SSIM | Cg
---|---|---|---|---
DWT | 0.6311 | 0.0047 | 0.7457 | 0.5392
NSCT+PCNN | 0.5179 | 0.0343 | 0.7744 | 0.6127
Tetrolet+PCNN | 0.7144 | 0.0010 | 0.7473 | 0.7586
Method of the invention | 0.8740 | 0.0005 | 0.6645 | 0.8361
As can be seen from the data in tables 1 and 2, compared with the conventional methods, the image fused by the method of the invention has the largest QMI value, indicating that the most information is retained and the effect is the best; the smallest blind evaluation index σ, indicating the best color restoration; and the smallest SSIM value together with the largest contrast gain Cg, indicating that the image fused by the method of the invention has the highest color contrast.
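The mutual-information index QMI used in the tables can be illustrated with a simple plug-in histogram estimator. This is a generic sketch, not necessarily the exact estimator used in the experiments; the bin count and the summation over both source images are assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in mutual information (in nats) between two equal-size
    grayscale images, estimated from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def qmi(src_a, src_b, fused, bins=32):
    """QMI-style score: information the fused image retains from
    both source images (larger is better)."""
    return (mutual_information(src_a, fused, bins)
            + mutual_information(src_b, fused, bins))
```

A fused image identical to one source scores the full self-information of that source plus whatever it shares with the other, which is why larger QMI indicates more retained information.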
In conclusion, compared with the traditional methods, the method of the invention has clear advantages in objective evaluation indices such as information retention, color-restoration performance, structural similarity and contrast.
While the preferred embodiments of the present patent have been described in detail, the present patent is not limited to the above embodiments, and various changes may be made without departing from the spirit of the present patent within the knowledge of those skilled in the art.
Claims (2)
1. A near-infrared and color visible-light image fusion algorithm based on the Tetrolet transformation, characterized by comprising the following steps:
step one: converting visible light images from RGB space to HSI space to obtain chromaticity I respectively H Component, saturation I S Component and luminance component I b ;
step two: performing the Tetrolet transformation on the luminance component I_b and the infrared image I_i respectively to obtain the corresponding low-frequency coefficients T_b^l and T_i^l and high-frequency coefficients T_b^h and T_i^h; fusing the low-frequency coefficients T_b^l and T_i^l with the expectation-maximization algorithm to obtain the fused low-frequency coefficient; fusing the high-frequency coefficients T_b^h and T_i^h with the improved adaptive PCNN to obtain the fused high-frequency coefficient; performing the inverse Tetrolet transformation on the fused low-frequency and high-frequency coefficients to obtain the fused brightness image I_f;
step three: applying nonlinear stretching to the saturation component I_S to obtain the stretched saturation component I'_S;
step four: replacing the original chromaticity component I_H, saturation component I_S and luminance component I_b with I_H, I'_S and I_f, and then inversely mapping to RGB space to obtain the final fused image;
in step two, in the Tetrolet transformation, filtering by the maximum of the first-order norm replaces the original filtering by the minimum of the first-order norm, with the selection formula as follows:
wherein G_{d,(c),z} denotes the high-frequency coefficients, S denotes the low-frequency coefficients, and c denotes the corresponding Tetrolet partition block;
processing the low-frequency subband coefficients of the image in step two: the low-frequency coefficients T_b^l and T_i^l are fused with the expectation-maximization (EM) algorithm; EM-based low-frequency fusion seeks the maximum-likelihood estimate of the latent distribution from a given incomplete data set, and the EM algorithm is applied to the fusion of the low-frequency coefficient images; the K low-frequency images I_k, k ∈ {1, 2, …, K}, to be fused are assumed to come from an unknown image F, and I_k is:
I_k(i,j) = α_k(i,j)F(l) + ε_k(i,j)
wherein α_k(i,j) ∈ {−1, 0, 1} is the sensor selectivity factor and ε_k(i,j) is the random noise at position (i,j); when the images do not have the same form, the sensor selectivity factor α_k is used:
in the expectation-maximization algorithm, the local noise ε_k(i,j) is modeled by a mixture of M Gaussian probability density functions, as follows:
the fusion of the low-frequency coefficients in step two comprises the following steps:
s1, carrying out standardization and normalization processing on image data:
I' k (i,j)=(I k (i,j)-μ)H
wherein I' k And I k Respectively carrying out standard normalization on the image and the original image, wherein mu is the average value of the whole image, and H is the gray level of the image;
s2, setting initial values of all parameters, adopting a method of averaging imaging sensor images, assuming that the fused image is F,
wherein w is k The weight coefficient of the image to be fused;
the overall variance of the pixel neighborhood window l=p×q is:
the initialization variance of the Gaussian mixture model is as follows:
s3, calculating the conditional probability density of the mth term of the mixed Gaussian distribution under the condition of given parameters:
s4, updating the parameter alpha k ,α k The value of (1, 0, 1) is selected to maximize the value of the following formula,
s5, recalculating the conditional probability density distribution g m,k,l Update the real scene F (l):
s6, updating model parameters of noise:
s7, repeating the steps S3 to S6 by using the new parameters, and determining that the fused image is:
in step two, the high-frequency subband coefficients of the image are processed and fused with the improved adaptive PCNN, in which the Sobel operator adaptively controls the PCNN threshold, specifically as follows:
wherein H(i,j) is the high-frequency subband coefficient;
the step two is to fuse the high frequency coefficient to obtain the tetrol coefficient corresponding to the large ignition frequency, and when n=n, the iteration is stopped, and the initial value is obtained asAnd->Obtaining the fused high-frequency subband coefficient y F The method comprises the following steps:
wherein the firing counts of the high-frequency coefficients are:
y_F(i,j), y_I(i,j) and y_V(i,j) denote the fusion coefficient, the infrared coefficient and the visible-light coefficient at position (i,j), respectively;
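The firing-count selection described above can be sketched as follows. The PCNN iteration itself (with the Sobel-controlled threshold and the stop condition n = N) is not reproduced here; the per-position firing-count maps are taken as precomputed inputs, which is an assumption of this sketch.

```python
import numpy as np

def fuse_high_frequency(y_v, y_i, fires_v, fires_i):
    """At each position keep the Tetrolet high-frequency coefficient
    whose PCNN firing count is larger (ties fall back to the visible
    coefficient)."""
    take_infrared = fires_i > fires_v        # larger firing count wins
    return np.where(take_infrared, y_i, y_v)
```

The rule is purely element-wise, so it applies unchanged to every high-frequency subband produced by the Tetrolet decomposition.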
in step three, the saturation channel image is adaptively stretched as follows:
wherein I'_S is the saturation component after stretching, max is the maximum pixel value of the saturation component, and min is the minimum pixel value of the saturation component;
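The exact nonlinear stretching formula did not survive extraction; the following is one plausible min-max-based instance, assuming a smooth sine nonlinearity — an illustration, not the claimed formula.

```python
import numpy as np

def stretch_saturation(s):
    """Hedged sketch of the adaptive saturation stretch of step three:
    normalize with the component's own min/max (as in the claim) and
    apply a smooth nonlinearity (an assumed choice)."""
    mn, mx = float(s.min()), float(s.max())
    s_norm = (s - mn) / (mx - mn + 1e-12)   # map to [0, 1]
    return np.sin(0.5 * np.pi * s_norm)     # nonlinear stretch
```

Because the normalization uses the component's own extrema, the stretch adapts per image, which matches the "adaptive" wording of the claim.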
in step four, the fusion rule compares the difference between the local-area standard deviations of the infrared image and the visible-light image with a threshold th: if the difference exceeds th, the coefficient of the image with the larger standard deviation is taken; otherwise the mean of the two image coefficients is taken, specifically as follows:
wherein F_{L,F} denotes the fused low-frequency component; the other two terms denote the luminance low-frequency component of the processed visible-light image and the low-frequency component of the processed near-infrared image; σ_{Vi,I} − σ_{In} denotes the difference between the mean-square deviations of the visible-light luminance low-frequency component and the near-infrared low-frequency component; the threshold th takes a value between 0.1 and 0.3.
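The local-standard-deviation selection rule above can be sketched as below; the 3×3 local window and the sliding-window implementation are assumptions, and th = 0.2 is taken from the claimed 0.1–0.3 range.

```python
import numpy as np

def local_std(x, r=1):
    """Standard deviation over a (2r+1)x(2r+1) window at every pixel,
    with edge padding so the output matches the input shape."""
    xp = np.pad(x, r, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(xp, (2 * r + 1, 2 * r + 1))
    return win.std(axis=(-2, -1))

def fuse_low_frequency(lv, li, th=0.2, r=1):
    """Where the local standard deviations of the visible (lv) and
    infrared (li) low-frequency bands differ by more than th, keep the
    coefficient from the band with the larger deviation; otherwise
    average the two."""
    sv, si = local_std(lv, r), local_std(li, r)
    diff = sv - si
    fused = (lv + li) / 2.0                  # default: mean of the two
    fused = np.where(diff > th, lv, fused)   # visible clearly stronger
    fused = np.where(diff < -th, li, fused)  # infrared clearly stronger
    return fused
```

The threshold keeps the rule stable: bands with similar local activity are averaged instead of switching noisily between sources.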
2. The near-infrared and color visible-light image fusion algorithm based on the Tetrolet transformation according to claim 1, wherein the conversion of the visible light image from RGB space to HSI space in step one adopts the standard model method, with the specific formulas as follows:
S = Max − Min
wherein the R, G, B components in the formulas are normalized data, Max denotes the maximum of (R, G, B), Min denotes the minimum of (R, G, B), and H, S, I denote the converted chromaticity, saturation and brightness respectively; if H < 0, 2π is added to it.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911280623.8A CN111080568B (en) | 2019-12-13 | 2019-12-13 | Near infrared and color visible light image fusion algorithm based on Tetrolet transformation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111080568A CN111080568A (en) | 2020-04-28 |
CN111080568B true CN111080568B (en) | 2023-05-26 |
Family
ID=70314281
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021217642A1 (en) * | 2020-04-30 | 2021-11-04 | 深圳市大疆创新科技有限公司 | Infrared image processing method and apparatus, and movable platform |
CN112837254B (en) * | 2021-02-25 | 2024-06-11 | 普联技术有限公司 | Image fusion method and device, terminal equipment and storage medium |
US12002294B2 (en) | 2021-03-04 | 2024-06-04 | Black Sesame Technologies Inc. | RGB-NIR dual camera face anti-spoofing method |
CN113542595B (en) * | 2021-06-28 | 2023-04-18 | 北京沧沐科技有限公司 | Capturing and monitoring method and system based on day and night images |
CN113724164B (en) * | 2021-08-31 | 2024-05-14 | 南京邮电大学 | Visible light image noise removing method based on fusion reconstruction guidance filtering |
CN114331937B (en) * | 2021-12-27 | 2022-10-25 | 哈尔滨工业大学 | Multi-source image fusion method based on feedback iterative adjustment under low illumination condition |
CN114663311A (en) * | 2022-03-24 | 2022-06-24 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
CN116844116B (en) * | 2023-09-01 | 2023-12-05 | 山东乐普矿用设备股份有限公司 | Underground comprehensive safety monitoring system based on illumination control system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4734776A (en) * | 1986-08-15 | 1988-03-29 | General Electric Company | Readout circuit for an optical sensing charge injection device facilitating an extended dynamic range |
CN102063710A (en) * | 2009-11-13 | 2011-05-18 | 烟台海岸带可持续发展研究所 | Method for realizing fusion and enhancement of remote sensing image |
CN103745470A (en) * | 2014-01-08 | 2014-04-23 | 兰州交通大学 | Wavelet-based interactive segmentation method for polygonal outline evolution medical CT (computed tomography) image |
CN108898569A (en) * | 2018-05-31 | 2018-11-27 | 安徽大学 | A kind of fusion method being directed to visible light and infrared remote sensing image and its fusion results evaluation method |
CN109614996A (en) * | 2018-11-28 | 2019-04-12 | 桂林电子科技大学 | The recognition methods merged based on the weakly visible light for generating confrontation network with infrared image |
CN109658371A (en) * | 2018-12-05 | 2019-04-19 | 北京林业大学 | The fusion method of infrared image and visible images, system and relevant device |
CN110111292A (en) * | 2019-04-30 | 2019-08-09 | 淮阴师范学院 | A kind of infrared and visible light image fusion method |
CN110335225A (en) * | 2019-07-10 | 2019-10-15 | 四川长虹电子系统有限公司 | The method of infrared light image and visual image fusion |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9008457B2 (en) * | 2010-05-31 | 2015-04-14 | Pesonify, Inc. | Systems and methods for illumination correction of an image |
CN105338262B (en) * | 2015-10-09 | 2018-09-21 | 浙江大华技术股份有限公司 | A kind of graphic images processing method and processing device |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |