CN111080568A - Tetrolet transform-based near-infrared and color visible light image fusion algorithm
- Publication number: CN111080568A (application CN201911280623.8A)
- Authority: CN (China)
- Prior art keywords: image, frequency, tetrolet, visible light, fusion
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction (G: Physics; G06: Computing; G06T: Image data processing or generation, in general)
- G06T7/90: Image analysis; determination of colour characteristics
- G06T2207/20221: Indexing scheme for image analysis or image enhancement; image combination; image fusion; image merging
Abstract
The invention provides a Tetrolet transform-based near-infrared and color visible light image fusion algorithm. It belongs to the technical field of image processing and addresses the low contrast and unclear details that result when near-infrared and color visible light images are fused. First, the color visible light image is converted to HSI space, and the Tetrolet transform is applied to the luminance component and the infrared image to obtain low-frequency and high-frequency subband coefficients. Second, an expectation-maximization fusion rule is proposed for the low-frequency subband coefficients, and an adaptive PCNN model serves as the fusion rule for the high-frequency subband coefficients; the fused luminance image is obtained by the inverse Tetrolet transform. A saturation-component stretching method is then proposed, and finally each processed component is mapped back to RGB space to complete the fusion. The fused image obtained by the method has clear details, full color, and markedly improved color contrast.
Description
Technical Field
The invention belongs to the technical field of image processing, relates to a near-infrared and color visible light image fusion algorithm, and particularly relates to a near-infrared and color visible light image fusion algorithm based on Tetrolet transformation.
Background
Image fusion combines multiple source images acquired by different sensors into a single image that contains all of their important features. It effectively reduces the uncertainty of the image information, enhances the information, and expands its content; the fused image carries all the characteristic information of the source images and is better suited to subsequent recognition, processing, and research. Infrared and visible light image fusion can combine the thermal radiation target information of the infrared image with the scene information of the visible light image, so the research is of great significance in both military and civil fields. In practice, however, fused near-infrared and color visible light images suffer from low contrast and unclear details.
Many scholars at home and abroad have studied image fusion algorithms. In 2010, Jens Krommweh proposed the Tetrolet transform, a sparse image representation method developed from the adaptive Haar wavelet transform; it has a good directional structure, can express high-dimensional texture features of images, is highly sparse, and is therefore well suited as a fusion framework. Nemalidinne proposed a PCNN-based infrared and visible light image fusion method in which the low-frequency components are fused with a pulse-coupled neural network (PCNN) excited by a corrected Laplacian so as to keep the maximum available information of the two source images, while the high-frequency components use an energy-based local log-Gabor fusion rule, with good results. Cheng proposed a new infrared and visible light image fusion framework based on an adaptive dual-channel unit, applying a pulse-coupled neural network with singular value decomposition (ADS-PCNN) to image fusion; the image average gradients (IAVG) of the high- and low-frequency components stimulate the ADS-PCNN separately, addressing the large spectral differences between infrared and visible light images and the black artifacts that tend to appear in fused images. A local structure information operator (LSI) serves as the adaptive connection strength to improve fusion accuracy, local singular value decomposition is applied to each source image, and the number of iterations is determined adaptively.
Disclosure of Invention
The invention aims to solve the problems in the prior art by providing a near-infrared and color visible light image fusion algorithm based on the Tetrolet transform, addressing the low contrast and unclear details that result when near-infrared and color visible light images are fused.
The purpose of the invention can be realized by the following technical scheme:
a near infrared and color visible light image fusion algorithm based on Tetrolet transformation comprises the following steps:
Step one: convert the visible light image from RGB space to HSI space to obtain the chrominance component I_H, the saturation component I_S, and the luminance component I_b;
Step two: apply the Tetrolet transform to the luminance component I_b and the infrared image I_i to obtain the corresponding low-frequency coefficients T_b^l and T_i^l and high-frequency coefficients T_b^h and T_i^h; fuse the low-frequency coefficients T_b^l and T_i^l with the expectation-maximization algorithm to obtain the fused low-frequency coefficient T_f^l; fuse the high-frequency coefficients T_b^h and T_i^h with the improved adaptive PCNN to obtain the fused high-frequency coefficient T_f^h; apply the inverse Tetrolet transform to T_f^l and T_f^h to obtain the fused luminance image I_f;
Step three: apply nonlinear stretching to the saturation component I_S to obtain the stretched saturation component I'_S;
Step four: replace the original chrominance, saturation, and luminance components with I_H, I'_S, and I_f, and map back to RGB space to obtain the final fused image.
The working principle of the invention is as follows: first, the color visible light image is converted to HSI space, and the Tetrolet transform is applied to the luminance component and the infrared image to obtain the low- and high-frequency subband coefficients. Second, an expectation-maximization fusion rule is proposed for the low-frequency subband coefficients, while for the high-frequency subband coefficients a Sobel operator adjusts the threshold of a PCNN model, giving an adaptive PCNN fusion rule; the fused luminance image is obtained by the inverse Tetrolet transform. Then, to counter the drop in saturation of the fused image, a saturation-component stretching method is proposed. Finally, each processed component is mapped back to RGB space to complete the fusion. The fused image obtained in this way has clear details and full color, and its color contrast is markedly improved.
In step one, the visible light image is converted from RGB space to HSI space with the standard model method; the formulas include:
H = H + 2π, if H < 0
S = Max − Min
where the R, G, B components are normalized data, Max denotes the maximum of (R, G, B), Min the minimum of (R, G, B), and H, S, I denote the converted chrominance, saturation, and luminance, respectively.
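As a minimal sketch of the conversion: the patent reproduces only the hue wrap-around and S = Max − Min, so the angular hue definition and the intensity I = (R+G+B)/3 below are assumptions standing in for the unreproduced formulas.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert normalized RGB (values in [0, 1]) to HSI.

    The wrap-around H = H + 2*pi when H < 0 and S = Max - Min follow the
    patent text; the atan2 hue form and I = (R+G+B)/3 are assumed.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    h = np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b)  # assumed hue form
    h = np.where(h < 0, h + 2.0 * np.pi, h)  # H = H + 2*pi if H < 0
    s = mx - mn                              # S = Max - Min (from the text)
    i = (r + g + b) / 3.0                    # assumed intensity definition
    return h, s, i

gray = np.array([[[0.5, 0.5, 0.5]]])
h, s, i = rgb_to_hsi(gray)
```

For a gray pixel the saturation is zero and the intensity equals the common channel value, as expected of any HSI variant.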
In step two, the Tetrolet transform selects the covering by the maximum of the first-order norm instead of the original minimum of the first-order norm; the selection formula is:
c* = arg max_c Σ_{d=1}^{3} Σ_z |G_{d,(c),z}|
where G_{d,(c),z} denotes the high-frequency coefficients, S the low-frequency coefficients, and c the corresponding Tetrolet decomposition block.
For the low-frequency subband coefficients in step two, the coefficients T_b^l and T_i^l are fused with the expectation-maximization (EM) algorithm, which is applied to low-frequency coefficient image fusion by seeking the maximum-likelihood estimate of the underlying distribution from a given incomplete data set. Suppose the K low-frequency images to be fused, I_k, k ∈ {1, 2, ..., K}, derive from an unknown image F, which makes the data set incomplete; a common model of I_k is:
I_k(i,j) = α_k(i,j) F(l) + ε_k(i,j)
where α_k(i,j) ∈ {−1, 0, 1} is the sensor selectivity factor and ε_k(i,j) is random noise at location (i,j); when the images do not have the same morphology, the sensor selectivity factor α_k is used.
In the expectation-maximization algorithm, the local noise ε_k(i,j) is modeled with a mixture of M Gaussian probability density functions:
f(ε_k) = Σ_{m=1}^{M} λ_m (2π σ_m²)^{−1/2} exp(−ε_k² / (2 σ_m²))
The low-frequency coefficient fusion in step two proceeds as follows:
S1. Standard-normalize the image data:
I'_k(i,j) = (I_k(i,j) − μ)H
where I'_k and I_k are the normalized image and the original image, μ is the mean of the whole image, and H is the gray level of the image;
S2. Set the initial values of the parameters: using the average of the imaging-sensor images, initialize the fused image F as the weighted combination of the images to be fused, with w_k the weight coefficient of each image; compute the overall variance over the pixel neighborhood window L = p × q and the initialized variances of the Gaussian mixture model;
S3. Compute the conditional probability density of the m-th term of the Gaussian mixture distribution given the current parameters;
S4. Update the parameter α_k, selecting it from {−1, 0, 1} so as to maximize the likelihood;
S5. Recompute the conditional probability density distribution g_{m,k,l} and update the real scene F(l);
S6. Update the model parameters of the noise;
S7. Repeat steps S3 to S6 with the new parameters; once they converge within a specified range, the fused image is obtained.
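The EM iteration (steps S1 to S7) can be sketched under strong simplifications that are assumptions, not the patent's full model: the sensor selectivity factor α_k is fixed to 1 and each sensor's noise is a single zero-mean Gaussian rather than an M-component mixture, in which case the alternating updates reduce to inverse-variance weighting.

```python
import numpy as np

def em_fuse_lowfreq(images, iters=20):
    """Minimal EM-style fusion of K low-frequency images.

    Simplifying assumptions: alpha_k = 1 and one Gaussian noise term per
    sensor, so the E-step yields inverse-variance weights and the M-step
    re-estimates each sensor's noise variance against the latent image F.
    """
    imgs = np.stack([np.asarray(im, dtype=float) for im in images])
    var = np.ones(len(imgs))              # initial noise variances
    F = imgs.mean(axis=0)                 # S2: initialize F as the average
    for _ in range(iters):
        w = 1.0 / var                     # E-step: inverse-variance weights
        F = np.tensordot(w, imgs, axes=1) / w.sum()  # update latent image
        var = ((imgs - F) ** 2).mean(axis=(1, 2))    # M-step: noise variances
        var = np.maximum(var, 1e-12)      # guard against degenerate variance
    return F

a = np.array([[1.0, 2.0], [3.0, 4.0]])
fused = em_fuse_lowfreq([a, a + 0.0])
```

With identical inputs the iteration is a fixed point and the fused result reproduces the input; with symmetric noise it converges to the balanced weighted mean.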
processing the high-frequency subband coefficients of the image in the second step, fusing the high-frequency coefficients by adopting improved self-adaptive PCNN, and adaptively controlling the threshold value of the PCNN by a Sobel operator, wherein the method specifically comprises the following steps:
where H (i, j) is the high frequency subband coefficient.
And step two, the high-frequency coefficients in the step two are fused to obtain a Tetrolet coefficient corresponding to the larger ignition frequency, when N is equal to N, the iteration is stopped, and an initial test value is taken asAnd in formula (25)Obtaining the fused high-frequency sub-band coefficient yFComprises the following steps:
the ignition frequency of the high-frequency coefficient is as follows:yF(i,j),yI(i,j),yV(i, j) represents the fusion coefficient, infrared coefficient, and visible light coefficient at position (i, j), respectively.
The adaptive stretching of the saturation channel image in step three maps the component through a nonlinear function of its extreme values, where I'_S is the stretched saturation component, Max is the maximum pixel value of the saturation component, and Min is the minimum pixel value of the saturation component.
The fusion rule in step four compares the difference between the local-area standard deviations of the infrared and visible light images with a threshold: if the difference exceeds the threshold, the coefficient of the image with the larger local standard deviation is taken; otherwise the average of the two coefficients is taken. The choice of the threshold th is therefore important; at present it is chosen mainly by experience, usually between 0.1 and 0.3. Specifically:
F_{L,F} denotes the fused low-frequency component, the inputs are the luminance low-frequency component of the processed visible light image and the low-frequency component of the processed near-infrared image, and σ_{Vi,I} − σ_{In} denotes the difference between the standard deviations of the visible light luminance low-frequency component and the near-infrared low-frequency component.
Compared with the prior art, the invention has the following advantages:
1. The invention provides a near-infrared and color visible light image fusion algorithm based on the Tetrolet transform. In the HSI color space, the near-infrared and color visible light images are decomposed with the Tetrolet transform; the high- and low-frequency components are processed separately with an adaptive pulse-coupled neural network and fused, and the saturation is stretched. The resulting image has clear detail and full color and is directly suitable for human viewing; its color contrast is markedly improved, with clear advantages in objective indexes such as image saturation, color recovery performance, structural similarity, and contrast.
2. The invention converts the RGB image into HSI space and processes the luminance, chrominance, and saturation components separately, exploiting the decorrelation among the H, S, I channels, so that the color information is not distorted.
3. The invention improves the decomposition frame of Tetrolet transformation, so that the decomposed high and low frequency coefficients are easier to process, and the quality of the fused image is greatly improved.
4. According to the invention, the saturation channel image is subjected to self-adaptive nonlinear stretching, so that the saturation under different scenes can be adaptively stretched to the optimal effect, and the contrast is improved.
Drawings
FIG. 1 is a schematic flow chart of the algorithm of the present invention;
FIG. 2 is a comparison graph I of the effect of image fusion by the algorithm of the present invention and other algorithms;
FIG. 3 is a comparison graph II of the effect of image fusion by the algorithm of the present invention and other algorithms;
in fig. 2: fig. a1 is a first set of visible light original images, fig. b1 is a first set of near-infrared original images, fig. c1 is an image obtained by fusing fig. a1 and b1 by the DWT method, fig. d1 is an image obtained by fusing fig. a1 and b1 by the NSCT-PCNN method, fig. e1 is an image obtained by fusing fig. a1 and b1 by the Tetrolet-PCNN method, and f1 is an image obtained by fusing fig. a1 and b1 by the method of the present invention;
in fig. 3: fig. a2 is a second set of visible light original images, fig. b2 is a second set of near-infrared original images, fig. c2 is an image obtained by fusing fig. a2 and b2 by the DWT method, fig. d2 is an image obtained by fusing fig. a2 and b2 by the NSCT-PCNN method, fig. e2 is an image obtained by fusing fig. a2 and b2 by the Tetrolet-PCNN method, and f2 is an image obtained by fusing fig. a2 and b2 by the method of the present invention.
Detailed Description
The technical solution of the present patent will be described in further detail with reference to the following embodiments.
Reference will now be made in detail to embodiments of the present patent, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present patent and are not to be construed as limiting the present patent.
Referring to fig. 1, the present embodiment provides a near-infrared and color visible light image fusion algorithm based on a Tetrolet transform, which includes the following steps:
Step one: convert the visible light image from RGB space to HSI space to obtain the chrominance component I_H, the saturation component I_S, and the luminance component I_b;
Step two: apply the Tetrolet transform to the luminance component I_b and the infrared image I_i to obtain the corresponding low-frequency coefficients T_b^l and T_i^l and high-frequency coefficients T_b^h and T_i^h; fuse the low-frequency coefficients T_b^l and T_i^l with the expectation-maximization algorithm to obtain the fused low-frequency coefficient T_f^l; fuse the high-frequency coefficients T_b^h and T_i^h with the improved adaptive PCNN to obtain the fused high-frequency coefficient T_f^h; apply the inverse Tetrolet transform to T_f^l and T_f^h to obtain the fused luminance image I_f;
Step three: apply nonlinear stretching to the saturation component I_S to obtain the stretched saturation component I'_S;
Step four: replace the original chrominance, saturation, and luminance components with I_H, I'_S, and I_f, and map back to RGB space to obtain the final fused image.
In step one, the visible light image is converted from RGB space to HSI space with the standard model method; the formulas include:
H = H + 2π, if H < 0
S = Max − Min
where the R, G, B components are normalized data, Max denotes the maximum of (R, G, B), Min the minimum of (R, G, B), and H, S, I denote the converted chrominance, saturation, and luminance, respectively.
In step two, the Tetrolet transform selects the covering by the maximum of the first-order norm instead of the original minimum of the first-order norm; the selection formula is:
c* = arg max_c Σ_{d=1}^{3} Σ_z |G_{d,(c),z}|
where G_{d,(c),z} denotes the high-frequency coefficients, S the low-frequency coefficients, and c the corresponding Tetrolet decomposition block.
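The maximum-ℓ1-norm covering selection can be illustrated on a single 4×4 block. The two candidate partitions below are hypothetical stand-ins for the 117 admissible tetromino coverings of the real Tetrolet transform, and the 4-pixel Haar analysis follows the standard Tetrolet construction.

```python
import numpy as np

def haar_tetromino_coeffs(values):
    """One low-pass and three high-pass Haar coefficients of a 4-pixel tile."""
    v = np.asarray(values, dtype=float)
    low = v.sum() / 2.0
    high = np.array([v[0] - v[1] + v[2] - v[3],
                     v[0] + v[1] - v[2] - v[3],
                     v[0] - v[1] - v[2] + v[3]]) / 2.0
    return low, high

def select_covering(block, coverings):
    """Pick the covering maximizing the l1 norm of the high-pass coefficients.

    `coverings` is a list of candidate partitions, each a list of four
    4-pixel index tuples (a stand-in for the 117 tetromino coverings of
    a 4x4 block used by the full transform).
    """
    flat = np.asarray(block, dtype=float).ravel()
    best, best_norm = None, -1.0
    for c, covering in enumerate(coverings):
        norm = sum(np.abs(haar_tetromino_coeffs(flat[list(tile)])[1]).sum()
                   for tile in covering)   # sum of |G_{d,(c),z}| over the block
        if norm > best_norm:
            best, best_norm = c, norm
    return best, best_norm

block = np.arange(16).reshape(4, 4)
# two hypothetical coverings: 2x2 squares vs. column strips
square = [(0, 1, 4, 5), (2, 3, 6, 7), (8, 9, 12, 13), (10, 11, 14, 15)]
strips = [(0, 4, 8, 12), (1, 5, 9, 13), (2, 6, 10, 14), (3, 7, 11, 15)]
idx, norm = select_covering(block, [square, strips])
```

On this smooth ramp the column strips capture the largest vertical variation, so the max-ℓ1 rule picks them; the original Tetrolet transform would instead minimize this norm to favour the sparsest covering.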
For the low-frequency subband coefficients in step two, the coefficients T_b^l and T_i^l are fused with the expectation-maximization (EM) algorithm, which is applied to low-frequency coefficient image fusion by seeking the maximum-likelihood estimate of the underlying distribution from a given incomplete data set. Suppose the K low-frequency images to be fused, I_k, k ∈ {1, 2, ..., K}, derive from an unknown image F, which makes the data set incomplete; a common model of I_k is:
I_k(i,j) = α_k(i,j) F(l) + ε_k(i,j)
where α_k(i,j) ∈ {−1, 0, 1} is the sensor selectivity factor and ε_k(i,j) is random noise at location (i,j); when the images do not have the same morphology, the sensor selectivity factor α_k is used.
In the expectation-maximization algorithm, the local noise ε_k(i,j) is modeled with a mixture of M Gaussian probability density functions:
f(ε_k) = Σ_{m=1}^{M} λ_m (2π σ_m²)^{−1/2} exp(−ε_k² / (2 σ_m²))
The low-frequency coefficient fusion in step two comprises the following steps:
S1. Standard-normalize the image data:
I'_k(i,j) = (I_k(i,j) − μ)H
where I'_k and I_k are the normalized image and the original image, μ is the mean of the whole image, and H is the gray level of the image;
S2. Set the initial values of the parameters: using the average of the imaging-sensor images, initialize the fused image F as the weighted combination of the images to be fused, with w_k the weight coefficient of each image; compute the overall variance over the pixel neighborhood window L = p × q and the initialized variances of the Gaussian mixture model;
S3. Compute the conditional probability density of the m-th term of the Gaussian mixture distribution given the current parameters;
S4. Update the parameter α_k, selecting it from {−1, 0, 1} so as to maximize the likelihood;
S5. Recompute the conditional probability density distribution g_{m,k,l} and update the real scene F(l);
S6. Update the model parameters of the noise;
S7. Repeat steps S3 to S6 with the new parameters; once they converge within a specified range, the fused image is obtained.
processing the high-frequency subband coefficients of the image in the second step, fusing the high-frequency coefficients by adopting improved self-adaptive PCNN, and adaptively controlling the threshold value of the PCNN by a Sobel operator, wherein the method specifically comprises the following steps:
where H (i, j) is the high frequency subband coefficient.
In the second step, based on high-frequency fusion of the improved self-adaptive PCNN, the ignition times of the PCNN reflect the strength degree of the neuron stimulated by the outside, and the sub-band coefficient after the Tetrolet transformation contains the detail information, so that the Tetrolet coefficient corresponding to the large ignition times is obtained; when N is equal to N, the iteration is stopped, and the initial test value is taken as And in formula (25)Obtaining the fused high-frequency sub-band coefficient yFComprises the following steps:
the ignition frequency of the high-frequency coefficient is as follows:yF(i,j),yI(i,j),yV(i, j) represents the fusion coefficient, infrared coefficient, and visible light coefficient at position (i, j), respectively.
If the luminance image obtained after fusion is converted directly back to RGB space, the colors of the image are weak, the contrast drops, and the result looks washed out and distorted; the saturation S is therefore stretched nonlinearly to improve the contrast. So that the saturation in different scenes is adaptively stretched to the best effect, the adaptive stretching method for the saturation channel image in step three uses the component's extreme values, where I'_S is the stretched saturation component, Max is the maximum pixel value of the saturation component, and Min is the minimum pixel value of the saturation component.
The fusion rule in step four compares the difference between the local-area standard deviations of the infrared and visible light images with a threshold: if the difference exceeds the threshold, the coefficient of the image with the larger local standard deviation is taken; otherwise the average of the two coefficients is taken. The choice of the threshold th is therefore important; at present it is chosen mainly by experience, usually between 0.1 and 0.3. Specifically:
F_{L,F} denotes the fused low-frequency component, the inputs are the luminance low-frequency component of the processed visible light image and the low-frequency component of the processed near-infrared image, and σ_{Vi,I} − σ_{In} denotes the difference between the standard deviations of the visible light luminance low-frequency component and the near-infrared low-frequency component.
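A sketch of this local-standard-deviation threshold rule; the 3×3 window and th = 0.2 (within the stated 0.1 to 0.3 range) are illustrative choices, not values fixed by the patent.

```python
import numpy as np

def fuse_lowfreq_by_local_std(Lv, Ln, th=0.2, win=3):
    """Threshold rule on the difference of local standard deviations.

    Where |sigma_visible - sigma_nir| exceeds th, take the coefficient of
    the image with the larger local deviation; otherwise average the two.
    Window size and th are illustrative assumptions.
    """
    def local_std(x):
        x = np.asarray(x, dtype=float)
        pad = np.pad(x, win // 2, mode='edge')
        out = np.empty_like(x)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = pad[i:i + win, j:j + win].std()
        return out

    sv, sn = local_std(Lv), local_std(Ln)
    diff = sv - sn
    avg = (np.asarray(Lv, float) + np.asarray(Ln, float)) / 2.0
    return np.where(diff > th, Lv, np.where(diff < -th, Ln, avg))

fused = fuse_lowfreq_by_local_std(np.ones((4, 4)), 3 * np.ones((4, 4)))
```

Two flat inputs have identical (zero) local deviations, so the rule falls back to their average; a locally busy region would instead win outright.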
The invention provides a near-infrared and color visible light image fusion algorithm based on the Tetrolet transform. To keep the color information undistorted, the color visible light image is converted to the HSI color space, and the luminance, chrominance, and saturation components are processed separately, exploiting the decorrelation among the H, S, I channels. The decomposition framework of the Tetrolet transform is also improved: the covering template is selected by the maximum first-order norm, which counters the shrinking of the high-frequency coefficients' value range in the original Tetrolet transform, so the decomposed high-frequency components carry more contour information, the decomposed high- and low-frequency coefficients are easier to process, and the quality of the fused image improves substantially.
A low-frequency fusion rule based on the expectation-maximization algorithm is proposed by seeking the maximum-likelihood estimate of the underlying distribution from a given incomplete data set. To better retain the detail of the fused image, a new PCNN network model serves as the high-frequency fusion rule: the coefficient of the neuron with the largest firing count is selected as the high-frequency component, and a difference-of-Gaussians operator adaptively controls the PCNN threshold. The inverse Tetrolet transform of the processed low- and high-frequency components yields the fused image used as the new luminance component. To improve the contrast of the result, the saturation component is stretched nonlinearly; finally, the processed luminance and saturation components together with the original chrominance component are mapped back to RGB space to complete the fusion. The fused image obtained by the method has clear details, full color, and markedly improved color contrast.
Effect verification: to verify the effect of the fusion algorithm of the invention, three common transform-domain fusion methods are compared with it: a discrete wavelet transform method (DWT) in which the low-frequency components are fused by averaging and the high-frequency components by a maximum-regional-energy rule; a non-subsampled contourlet transform method (NSCT-PCNN) in which the low-frequency components use the expectation-maximization rule and the high-frequency components a fixed-threshold PCNN; and a Tetrolet decomposition method (Tetrolet-PCNN) in which the low-frequency components are averaged and the high-frequency components use a fixed-threshold PCNN. The DWT uses 4 decomposition levels; NSCT-PCNN uses 4 levels with 4, 8, and 16 decomposition directions; Tetrolet-PCNN uses 4 levels with connection strength 0.118, input attenuation coefficient 0.145, and connection amplitude 135.5; the method of the invention uses 4 Tetrolet decomposition levels.
Two groups of images with a resolution of 1024 × 680 are selected for fusion comparison, and the experimental results are compared subjectively and objectively.
Subjective comparison: fig. 2 and fig. 3 show the two groups of images fused by the algorithm of the invention and by the other algorithms.
In fig. 2, a view a1 is a first group of visible light original images, a view b1 is a first group of near-infrared original images, a view c1 is a view obtained by fusing the images of a1 and b1 by the DWT method, a view d1 is a view obtained by fusing the images of a1 and b1 by the NSCT-PCNN method, a view e1 is a view obtained by fusing the images of a1 and b1 by the Tetrolet-PCNN method, and a view f1 is a view obtained by fusing the images of a1 and b1 by the method of the present invention.
In fig. 3, a view a2 is a second group of visible light original images, a view b2 is a second group of near-infrared original images, a view c2 is an image obtained by fusing the images a2 and b2 by the DWT method, a view d2 is an image obtained by fusing the images a2 and b2 by the NSCT-PCNN method, a view e2 is an image obtained by fusing the images a2 and b2 by the Tetrolet-PCNN method, and a view f2 is an image obtained by fusing the images a2 and b2 by the method of the present invention.
As the fusion results in fig. 2 and fig. 3 show, the edges in the DWT result are blurred and its fusion quality is the worst; the NSCT-PCNN method extracts the spatial detail of the source images, but the edges of the figures in the scene are blurred; the Tetrolet-PCNN method gives better contours and boundaries, but its color contrast falls well short of the method of the invention. The comparison shows that the invention better preserves spatial detail and target edge information, yields the clearest edges and house textures, and produces a color contrast better suited to human visual perception, for the best overall effect.
Objective comparison: four evaluation indexes are selected to evaluate all the fusion results objectively: the information retention index QMI, the blind evaluation index σ, the image structural similarity index SSIM, and the image contrast gain Cg. QMI measures how much of the source images' original information is retained in the final fused image; the larger its value, the more information is retained and the better the effect. The blind evaluation index σ evaluates the color recovery performance of the fusion algorithm; the smaller σ is, the better the algorithm performs. The structural similarity SSIM ranges from 0 to 1 and equals 1 when the two images are identical. The image contrast gain Cg represents the average contrast difference between the fused image and the original image and reflects the difference in image contrast more intuitively. Tables 1 and 2 list the data for the two groups of images fused with the method of the present invention and the three methods described above.
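As an illustration of the SSIM and Cg indexes above, a simplified sketch follows. `ssim_global` computes a single-window SSIM (the standard index averages this quantity over local windows), and `contrast_gain` uses a mean absolute finite-difference gradient as a stand-in contrast measure; the function names and the exact contrast measure are assumptions, not the formulations used in the experiments.

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window SSIM; the full index averages this over local
    windows -- a simplified sketch of the SSIM column in the tables."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def contrast_gain(fused, orig):
    """Average difference in contrast between fused and original image,
    measured here by the mean absolute finite-difference gradient."""
    def mean_contrast(a):
        gx = np.abs(np.diff(a, axis=1)).mean()
        gy = np.abs(np.diff(a, axis=0)).mean()
        return gx + gy
    return mean_contrast(fused) - mean_contrast(orig)
```

Identical images give an SSIM of exactly 1, and an image with doubled gradients gives a positive contrast gain, matching the behavior described above.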
TABLE 1. Objective evaluation indexes for the first group of images

Method | QMI | σ | SSIM | Cg
---|---|---|---|---
DWT | 0.6255 | 0.0062 | 0.6447 | 0.6326
NSCT+PCNN | 0.5009 | 0.0481 | 0.6780 | 0.6238
Tetrolet+PCNN | 0.7018 | 0.0015 | 0.6092 | 0.7486
Method of the invention | 0.8681 | 0.0001 | 0.5223 | 0.8467
TABLE 2. Objective evaluation indexes for the second group of images

Method | QMI | σ | SSIM | Cg
---|---|---|---|---
DWT | 0.6311 | 0.0047 | 0.7457 | 0.5392
NSCT+PCNN | 0.5179 | 0.0343 | 0.7744 | 0.6127
Tetrolet+PCNN | 0.7144 | 0.0010 | 0.7473 | 0.7586
Method of the invention | 0.8740 | 0.0005 | 0.6645 | 0.8361
As can be seen from the data in Tables 1 and 2, compared with the traditional methods, the image fused by the method of the invention has the largest QMI value, indicating that the most information is retained and the effect is best; its blind evaluation index σ is the smallest, indicating the best fusion performance; and its smallest structural similarity SSIM together with its largest contrast gain Cg shows that the color contrast of the image fused by the method of the invention is the highest.
In conclusion, compared with the traditional methods, the proposed method has obvious advantages in objective evaluation indexes such as information retention, color recovery performance, structural similarity, and contrast.
Although the preferred embodiments of the present patent have been described in detail, the present patent is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present patent within the knowledge of those skilled in the art.
Claims (10)
1. A near-infrared and color visible light image fusion algorithm based on the Tetrolet transform, characterized by comprising the following steps:
Step one: converting the visible light image from RGB space to HSI space to obtain the chromaticity component I_H, the saturation component I_S, and the luminance component I_b;
Step two: processing the low-frequency subband coefficients of the image: performing the Tetrolet transform on the luminance component I_b and the infrared image I_i respectively to obtain the corresponding low-frequency coefficients T_b^l and T_i^l and high-frequency coefficients T_b^h and T_i^h; fusing the low-frequency coefficients T_b^l and T_i^l with the expectation-maximization algorithm to obtain the fused low-frequency coefficients; processing the high-frequency subband coefficients of the image: fusing the high-frequency coefficients T_b^h and T_i^h with the improved adaptive PCNN to obtain the fused high-frequency coefficients; performing the inverse Tetrolet transform on the fused low-frequency and high-frequency coefficients to obtain the fused luminance image I_f;
Step three: performing nonlinear stretching on the saturation component I_S to obtain the stretched saturation component I'_S;
Step four: replacing the original chromaticity component I_H, saturation component I_S, and luminance component I_b with I_H, I'_S, and I_f, and then inversely mapping to RGB space to obtain the final fused image.
2. The near-infrared and color visible light image fusion algorithm based on the Tetrolet transform of claim 1, wherein the conversion of the visible light image from RGB space to HSI space in step one adopts the standard model, with the specific formulas as follows:
H = H + 2π, if H < 0
S = Max − Min
wherein the R, G, B components in the formulas are normalized data, Max denotes the maximum of (R, G, B), Min denotes the minimum of (R, G, B), and H, S, I denote the converted chromaticity, saturation, and luminance, respectively.
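A minimal sketch of an RGB-to-HSI conversion consistent with the fragments above (S = Max − Min, and H wrapped by +2π when negative) follows; the claim's full hue formula is not reproduced in the text, so the piecewise hexcone hue used here is an assumption for illustration.

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """RGB arrays (normalized to [0, 1]) -> (H, S, I). A sketch of one
    standard hexcone formulation matching the claim's visible fragments:
    S = Max - Min, and H = H + 2*pi when H < 0."""
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    c = mx - mn
    h = np.zeros_like(r)
    nz = c > 0
    rmax = nz & (mx == r)                 # red is the maximum channel
    gmax = nz & (mx == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    # piecewise hue, one pi/3 sextant per dominant channel
    h[rmax] = (np.pi / 3) * ((g - b)[rmax] / c[rmax])
    h[gmax] = (np.pi / 3) * ((b - r)[gmax] / c[gmax] + 2)
    h[bmax] = (np.pi / 3) * ((r - g)[bmax] / c[bmax] + 4)
    h = np.where(h < 0, h + 2 * np.pi, h)  # H = H + 2*pi, if H < 0
    s = c                                  # S = Max - Min, as in the claim
    i = (r + g + b) / 3
    return h, s, i
```

For pure red this gives H = 0, S = 1, I = 1/3, and for pure blue H = 4π/3, as expected for a hexcone hue.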
3. The near-infrared and color visible light image fusion algorithm based on the Tetrolet transform of claim 1, wherein in the Tetrolet transform of step two, the covering with the maximum first-order norm is selected for filtering, instead of the minimum first-order norm of the original transform; the selection formula is as follows:
wherein G_{d,(c),z} denotes the high-frequency coefficients, S the low-frequency coefficients, and c the corresponding Tetrolet decomposition block.
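The modified covering selection can be sketched as follows: given the high-frequency coefficients G_{d,(c),z} produced by each candidate tetromino covering c, the covering with the largest first-order (l1) norm is chosen, the opposite of the original Tetrolet rule. Representing the candidates as a list of arrays is an assumption for illustration.

```python
import numpy as np

def select_covering(high_coeffs_per_covering):
    """Pick the index of the tetromino covering whose high-frequency
    coefficients have the LARGEST l1 norm (the claim's modification),
    rather than the smallest as in the original Tetrolet transform."""
    norms = [np.abs(g).sum() for g in high_coeffs_per_covering]
    return int(np.argmax(norms))
```

A covering with strong high-frequency energy is thus preferred, which favors keeping edge and texture detail in the subbands.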
4. The near-infrared and color visible light image fusion algorithm based on the Tetrolet transform of claim 3, wherein in step two the low-frequency subband coefficients are processed: the low-frequency coefficients T_b^l and T_i^l are fused with the expectation-maximization algorithm, which fuses the low-frequency coefficients by seeking the maximum-likelihood estimate of the underlying distribution from a given incomplete data set; the expectation-maximization algorithm is applied to the fusion of the low-frequency coefficient images; assuming the K low-frequency images to be fused I_k, k ∈ {1, 2, …, K}, originate from an unknown image F, a common model of I_k is:
I_k(i, j) = α_k(i, j)F(l) + ε_k(i, j)
wherein α_k(i, j) ∈ {−1, 0, 1} is the sensor selectivity factor and ε_k(i, j) is the random noise at location (i, j); when the images do not have the same morphology, the sensor selectivity factor α_k is used:
In the expectation-maximization algorithm, the local noise ε_k(i, j) is modeled with a mixture of M Gaussian probability density functions, as follows:
5. The near-infrared and color visible light image fusion algorithm based on the Tetrolet transform of claim 4, wherein the fusion of the low-frequency coefficients in step two comprises the following steps:
S1: standardizing and normalizing the image data:
I'_k(i, j) = (I_k(i, j) − μ)H
wherein I'_k and I_k are respectively the image after standard normalization and the original image, μ is the mean of the whole image, and H is the number of gray levels of the image;
S2: setting the initial value of each parameter; adopting the average of the imaging sensor images, the fused image is assumed to be F,
wherein w_k is the weight coefficient of the images to be fused;
the overall variance of the pixel neighborhood window L = p × q is:
the initialized variance of the Gaussian mixture model is:
S3: calculating the conditional probability density of the m-th component of the Gaussian mixture distribution given the parameters:
S4: updating the parameter α_k; α_k is selected from {−1, 0, 1} so as to maximize the value of the following formula,
S5: recalculating the conditional probability density distribution g_{m,k,l} and updating the real scene f(l):
S6: updating the model parameters of the noise:
S7: repeating steps S3 to S6 with the new parameters; when the parameters converge to a specific range, the fused image is determined as:
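A heavily simplified sketch of the expectation-maximization fusion loop follows. It collapses the patent's M-component Gaussian mixture to a single Gaussian per image and fixes α_k = 1, so it illustrates only the alternation between estimating the scene F and the noise variances (the skeleton of steps S2 to S7), not the claimed procedure itself.

```python
import numpy as np

def em_fuse(images, iters=20):
    """Simplified EM fusion sketch: each observation I_k = F + eps_k with
    zero-mean Gaussian noise of unknown variance sigma_k^2 (one Gaussian
    instead of the patent's M-component mixture, alpha_k fixed to 1).
    E and M steps alternate between the scene estimate F and the noise
    variances."""
    imgs = [np.asarray(im, dtype=float) for im in images]
    f = np.mean(imgs, axis=0)                 # S2: initialize with the average
    var = [np.var(im - f) + 1e-12 for im in imgs]
    for _ in range(iters):
        w = np.array([1.0 / v for v in var])  # inverse-variance weights
        f = sum(wk * im for wk, im in zip(w, imgs)) / w.sum()  # update F
        var = [np.var(im - f) + 1e-12 for im in imgs]          # update noise
    return f
```

With identical inputs the estimate converges to the input itself; with differing inputs the cleaner (lower-variance) observation receives the larger weight.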
6. The near-infrared and color visible light image fusion algorithm based on the Tetrolet transform of claim 1, wherein in step two the high-frequency subband coefficients are processed: the high-frequency coefficients are fused using the improved adaptive PCNN, with a Sobel operator adaptively controlling the threshold of the PCNN, specifically as follows:
wherein H(i, j) is the high-frequency subband coefficient.
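The Sobel edge measure that could drive such an adaptive threshold can be sketched as follows; the claim's exact threshold formula is not reproduced in the text, so only the gradient magnitude is shown, and the replicate-border handling is an assumption.

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel gradient magnitude of a 2-D array -- a sketch of the edge
    measure a Sobel operator would supply for adapting the PCNN
    threshold. Borders are handled by replicate padding."""
    img = np.asarray(img, dtype=float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img, 1, mode="edge")       # replicate the border pixels
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                    # correlate with both kernels
        for j in range(3):
            win = p[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)
```

A constant image yields zero magnitude everywhere, while a step edge yields a strong response, which is what lets the threshold adapt to local edge strength.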
7. The near-infrared and color visible light image fusion algorithm based on the Tetrolet transform of claim 6, wherein the high-frequency coefficient fusion in step two takes the Tetrolet coefficient corresponding to the number of firings; the iteration stops when n = N, the initial value is taken as given, and by formula (25) the fused high-frequency subband coefficient y_F is obtained as:
8. The near-infrared and color visible light image fusion algorithm based on the Tetrolet transform of claim 1, wherein the adaptive stretching of the saturation channel image in step three is specifically as follows:
wherein I'_S is the stretched saturation component, Max is the maximum pixel value of the saturation component, and Min is the minimum pixel value of the saturation component.
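Since the claim's stretching formula itself is not reproduced in the text, the following is a hypothetical nonlinear stretch built only from the Max and Min quantities the claim defines; the sine mapping is an assumption for illustration, not the patented formula.

```python
import numpy as np

def stretch_saturation(s):
    """Hypothetical nonlinear saturation stretch: normalize the
    saturation component by its Max and Min pixel values, then apply a
    sine mapping that boosts mid-range saturation."""
    s = np.asarray(s, dtype=float)
    mn, mx = s.min(), s.max()
    if mx == mn:
        return np.zeros_like(s)
    t = (s - mn) / (mx - mn)           # linear normalization to [0, 1]
    return np.sin(0.5 * np.pi * t)     # nonlinear (sine) stretch
```

The minimum maps to 0, the maximum to 1, and mid-range values are lifted above the linear ramp, which is the qualitative effect a saturation stretch aims for.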
9. The near-infrared and color visible light image fusion algorithm based on the Tetrolet transform of claim 1, wherein the fusion rule in step four compares the difference between the local-region standard deviations of the infrared image and the visible light image with a threshold: when the difference is larger than the threshold, the coefficient of the image with the larger deviation is taken; when it is smaller, the average of the two images' coefficients is taken, specifically as follows:
wherein F_{L,F} denotes the fused low-frequency component, the two terms in the formula denote the luminance low-frequency component of the processed visible light image and the low-frequency component of the processed near-infrared image, respectively, and σ_{Vi,I} − σ_{In} denotes the difference between the mean square error of the visible light image's luminance low-frequency component and that of the near-infrared image's low-frequency component.
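The rule can be sketched as follows, with the threshold th in the 0.1 to 0.3 range of claim 10; the per-pixel local window and its handling at the borders are simplifying assumptions.

```python
import numpy as np

def fuse_low_freq(vis, nir, th=0.2, win=3):
    """Sketch of the claimed low-frequency rule: compare the difference
    of local standard deviations against threshold th; where it exceeds
    th, keep the coefficient of the image with the larger local
    deviation, otherwise average the two coefficients."""
    vis = np.asarray(vis, dtype=float)
    nir = np.asarray(nir, dtype=float)
    h, w = vis.shape
    out = np.empty_like(vis)
    r = win // 2
    for i in range(h):
        for j in range(w):
            sl = np.s_[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            sv, sn = vis[sl].std(), nir[sl].std()   # local deviations
            if abs(sv - sn) > th:
                out[i, j] = vis[i, j] if sv > sn else nir[i, j]
            else:
                out[i, j] = 0.5 * (vis[i, j] + nir[i, j])
    return out
```

Two flat inputs are simply averaged, while a textured region wins outright wherever its local deviation exceeds the other image's by more than th.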
10. The near-infrared and color visible light image fusion algorithm based on the Tetrolet transform of claim 1, wherein the value of th is between 0.1 and 0.3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911280623.8A CN111080568B (en) | 2019-12-13 | 2019-12-13 | Near infrared and color visible light image fusion algorithm based on Tetrolet transformation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911280623.8A CN111080568B (en) | 2019-12-13 | 2019-12-13 | Near infrared and color visible light image fusion algorithm based on Tetrolet transformation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111080568A true CN111080568A (en) | 2020-04-28 |
CN111080568B CN111080568B (en) | 2023-05-26 |
Family
ID=70314281
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911280623.8A Active CN111080568B (en) | 2019-12-13 | 2019-12-13 | Near infrared and color visible light image fusion algorithm based on Tetrolet transformation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111080568B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112837254A (en) * | 2021-02-25 | 2021-05-25 | 普联技术有限公司 | Image fusion method and device, terminal equipment and storage medium |
CN113542595A (en) * | 2021-06-28 | 2021-10-22 | 北京沧沐科技有限公司 | Capturing and monitoring method and system based on day and night images |
WO2021217642A1 (en) * | 2020-04-30 | 2021-11-04 | 深圳市大疆创新科技有限公司 | Infrared image processing method and apparatus, and movable platform |
CN113688707A (en) * | 2021-03-04 | 2021-11-23 | 黑芝麻智能科技(上海)有限公司 | Face anti-spoofing method |
CN113724164A (en) * | 2021-08-31 | 2021-11-30 | 南京邮电大学 | Visible light image noise removing method based on fusion reconstruction guidance filtering |
CN114331937A (en) * | 2021-12-27 | 2022-04-12 | 哈尔滨工业大学 | Multi-source image fusion method based on feedback iterative adjustment under low illumination condition |
CN114663311A (en) * | 2022-03-24 | 2022-06-24 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
CN114708181A (en) * | 2022-04-18 | 2022-07-05 | 烟台艾睿光电科技有限公司 | Image fusion method, device, equipment and storage medium |
CN116844116A (en) * | 2023-09-01 | 2023-10-03 | 山东乐普矿用设备股份有限公司 | Underground comprehensive safety monitoring system based on illumination control system |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4734776A (en) * | 1986-08-15 | 1988-03-29 | General Electric Company | Readout circuit for an optical sensing charge injection device facilitating an extended dynamic range |
CN102063710A (en) * | 2009-11-13 | 2011-05-18 | 烟台海岸带可持续发展研究所 | Method for realizing fusion and enhancement of remote sensing image |
US20110293179A1 (en) * | 2010-05-31 | 2011-12-01 | Mert Dikmen | Systems and methods for illumination correction of an image |
CN103745470A (en) * | 2014-01-08 | 2014-04-23 | 兰州交通大学 | Wavelet-based interactive segmentation method for polygonal outline evolution medical CT (computed tomography) image |
US20180300906A1 (en) * | 2015-10-09 | 2018-10-18 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for fusion display of thermal infrared and visible image |
CN108898569A (en) * | 2018-05-31 | 2018-11-27 | 安徽大学 | Fusion method for visible light and infrared remote sensing images and fusion result evaluation method thereof |
CN109614996A (en) * | 2018-11-28 | 2019-04-12 | 桂林电子科技大学 | The recognition methods merged based on the weakly visible light for generating confrontation network with infrared image |
CN109658371A (en) * | 2018-12-05 | 2019-04-19 | 北京林业大学 | The fusion method of infrared image and visible images, system and relevant device |
CN110111292A (en) * | 2019-04-30 | 2019-08-09 | 淮阴师范学院 | A kind of infrared and visible light image fusion method |
CN110335225A (en) * | 2019-07-10 | 2019-10-15 | 四川长虹电子系统有限公司 | The method of infrared light image and visual image fusion |
- 2019-12-13 CN CN201911280623.8A patent/CN111080568B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4734776A (en) * | 1986-08-15 | 1988-03-29 | General Electric Company | Readout circuit for an optical sensing charge injection device facilitating an extended dynamic range |
CN102063710A (en) * | 2009-11-13 | 2011-05-18 | 烟台海岸带可持续发展研究所 | Method for realizing fusion and enhancement of remote sensing image |
US20110293179A1 (en) * | 2010-05-31 | 2011-12-01 | Mert Dikmen | Systems and methods for illumination correction of an image |
CN103745470A (en) * | 2014-01-08 | 2014-04-23 | 兰州交通大学 | Wavelet-based interactive segmentation method for polygonal outline evolution medical CT (computed tomography) image |
US20180300906A1 (en) * | 2015-10-09 | 2018-10-18 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for fusion display of thermal infrared and visible image |
CN108898569A (en) * | 2018-05-31 | 2018-11-27 | 安徽大学 | Fusion method for visible light and infrared remote sensing images and fusion result evaluation method thereof |
CN109614996A (en) * | 2018-11-28 | 2019-04-12 | 桂林电子科技大学 | The recognition methods merged based on the weakly visible light for generating confrontation network with infrared image |
CN109658371A (en) * | 2018-12-05 | 2019-04-19 | 北京林业大学 | The fusion method of infrared image and visible images, system and relevant device |
CN110111292A (en) * | 2019-04-30 | 2019-08-09 | 淮阴师范学院 | A kind of infrared and visible light image fusion method |
CN110335225A (en) * | 2019-07-10 | 2019-10-15 | 四川长虹电子系统有限公司 | The method of infrared light image and visual image fusion |
Non-Patent Citations (9)
Title |
---|
YU HUANG: "Fusion of visible and infrared image based on stationary tetrolet transform" * |
FENG XIN: "Infrared and visible image fusion based on a deep Boltzmann model" * |
FENG XIN: "Multi-focus image fusion based on super-resolution and group sparse representation" * |
YANG SHENGWEI: "Infrared and color visible image fusion based on NSST and IHS" * |
SHEN YU: "Infrared and visible image fusion based on the Tetrolet transform" * |
SHEN YU: "Research on infrared and visible image fusion methods based on multi-scale geometric analysis" * |
DONG YANAN: "Research on an infrared and visible image fusion algorithm based on the Tetrolet transform" * |
QIU ZEMIN: "An infrared and visible image fusion algorithm combining region and edge features" * |
GAO JISEN: "Research on an image fusion algorithm based on an improved Tetrolet transform" * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021217642A1 (en) * | 2020-04-30 | 2021-11-04 | 深圳市大疆创新科技有限公司 | Infrared image processing method and apparatus, and movable platform |
CN112837254A (en) * | 2021-02-25 | 2021-05-25 | 普联技术有限公司 | Image fusion method and device, terminal equipment and storage medium |
CN112837254B (en) * | 2021-02-25 | 2024-06-11 | 普联技术有限公司 | Image fusion method and device, terminal equipment and storage medium |
CN113688707A (en) * | 2021-03-04 | 2021-11-23 | 黑芝麻智能科技(上海)有限公司 | Face anti-spoofing method |
US12002294B2 (en) | 2021-03-04 | 2024-06-04 | Black Sesame Technologies Inc. | RGB-NIR dual camera face anti-spoofing method |
CN113542595A (en) * | 2021-06-28 | 2021-10-22 | 北京沧沐科技有限公司 | Capturing and monitoring method and system based on day and night images |
CN113724164B (en) * | 2021-08-31 | 2024-05-14 | 南京邮电大学 | Visible light image noise removing method based on fusion reconstruction guidance filtering |
CN113724164A (en) * | 2021-08-31 | 2021-11-30 | 南京邮电大学 | Visible light image noise removing method based on fusion reconstruction guidance filtering |
CN114331937A (en) * | 2021-12-27 | 2022-04-12 | 哈尔滨工业大学 | Multi-source image fusion method based on feedback iterative adjustment under low illumination condition |
CN114331937B (en) * | 2021-12-27 | 2022-10-25 | 哈尔滨工业大学 | Multi-source image fusion method based on feedback iterative adjustment under low illumination condition |
CN114663311A (en) * | 2022-03-24 | 2022-06-24 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
CN114708181A (en) * | 2022-04-18 | 2022-07-05 | 烟台艾睿光电科技有限公司 | Image fusion method, device, equipment and storage medium |
CN116844116B (en) * | 2023-09-01 | 2023-12-05 | 山东乐普矿用设备股份有限公司 | Underground comprehensive safety monitoring system based on illumination control system |
CN116844116A (en) * | 2023-09-01 | 2023-10-03 | 山东乐普矿用设备股份有限公司 | Underground comprehensive safety monitoring system based on illumination control system |
Also Published As
Publication number | Publication date |
---|---|
CN111080568B (en) | 2023-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111080568B (en) | Near infrared and color visible light image fusion algorithm based on Tetrolet transformation | |
CN108876735B (en) | Real image blind denoising method based on depth residual error network | |
CN107194904B (en) | NSCT area image fusion method based on supplement mechanism and PCNN | |
CN109191390A (en) | A kind of algorithm for image enhancement based on the more algorithm fusions in different colours space | |
CN111968041A (en) | Self-adaptive image enhancement method | |
CN106056564B (en) | Edge clear image interfusion method based on joint sparse model | |
CN111476725A (en) | Image defogging enhancement algorithm based on gradient domain oriented filtering and multi-scale Retinex theory | |
CN113837974B (en) | NSST domain power equipment infrared image enhancement method based on improved BEEPS filtering algorithm | |
CN112700389B (en) | Active sludge microorganism color microscopic image denoising method | |
CN107358585A (en) | Misty Image Enhancement Method based on fractional order differential and dark primary priori | |
CN107085835B (en) | Color image filtering method based on quaternary number Weighted Kernel Norm minimum | |
CN106651817A (en) | Non-sampling contourlet-based image enhancement method | |
CN116664462B (en) | Infrared and visible light image fusion method based on MS-DSC and I_CBAM | |
CN113313702A (en) | Aerial image defogging method based on boundary constraint and color correction | |
CN107689038A (en) | A kind of image interfusion method based on rarefaction representation and circulation guiding filtering | |
CN111563866B (en) | Multisource remote sensing image fusion method | |
CN112184646A (en) | Image fusion method based on gradient domain oriented filtering and improved PCNN | |
Feng et al. | Low-light color image enhancement based on Retinex | |
Kumar et al. | A two-level hybrid image fusion technique for color image contrast enhancement | |
CN113850744A (en) | Image enhancement algorithm based on self-adaptive Retinex and wavelet fusion | |
CN106803236A (en) | Asymmetric correction method based on fuzzy field singular value decomposition | |
CN114897757B (en) | NSST and parameter self-adaptive PCNN-based remote sensing image fusion method | |
Hsu et al. | Region-based image fusion with artificial neural network | |
CN113177904B (en) | Image fusion method and system | |
CN115760630A (en) | Low-illumination image enhancement method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |