CN114897751A - Infrared and visible light image perception fusion method based on multi-scale structural decomposition - Google Patents
- Publication number
- CN114897751A (application CN202210381391.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- infrared
- scale
- visible light
- contrast
- Prior art date
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/20 — Image enhancement or restoration using local operators
- G06T5/70 — Denoising; Smoothing
- G06T2207/10024 — Color image
- G06T2207/10048 — Infrared image
- G06T2207/20221 — Image fusion; Image merging
(All within G06T — Image data processing or generation, under G06 — Computing; Calculating or Counting, Section G — Physics.)
Abstract
The invention relates to an infrared and visible light image perception fusion method based on multi-scale structural decomposition, and belongs to the technical field of multi-sensor image fusion. The method fully considers the relevant characteristics of the Human Visual System (HVS) and helps address potential shortcomings of current fusion research in visual information perception. Unlike other algorithms, the method constructs a multi-scale structural decomposition based on scale-aware edge preservation, yielding image structures at different scales in which edge information is kept in each layer and small-scale details can be regarded as structures with fine spatial scales. In addition, the method fully considers pixel-level saliency information and large-scale structural information during fusion, so a fused image that is rich in information and has good visual perception can be obtained.
Description
Technical Field
The invention relates to an infrared and visible light image perception fusion method based on multi-scale structural decomposition, and belongs to the technical field of multi-sensor image fusion.
Background
Image fusion is important in image processing and computer vision and is widely applied in military, remote sensing, medical image processing, industrial inspection, and other fields. Among its branches, infrared and visible light image fusion has become one of the most studied owing to its unique applications. Visible light images generally have higher resolution and contain important scene detail, but their imaging quality is susceptible to external factors such as weather and lighting. In contrast, infrared images contain information hidden or lost in visible light images and can reflect the thermal radiation of the scene, but their detail is often poor. The information in visible light and infrared images is therefore complementary to a certain extent, and fusing the two can yield a relatively complete scene representation. One basic principle of infrared and visible image fusion is to preserve as much salient information from both images as possible. In addition, it is desirable that the fused image introduce few artifacts and have good visual perception.
In general, infrared and visible image fusion includes three important steps: feature extraction, fusion strategy formulation, and image reconstruction. Depending on the analysis tools used, existing infrared and visible light image fusion algorithms can be classified into six categories: multi-scale transform based methods, subspace based methods, sparse representation based methods, saliency based methods, deep learning based methods, and hybrid methods. Among these, the most widely studied and applied are multi-scale transform based fusion methods, which first decompose a source image with a transform technique to obtain multi-scale information and then fuse the information at each scale using a certain fusion strategy. The Laplacian pyramid is a classical transform commonly used for multi-scale decomposition, from which techniques such as the contrast pyramid, the steerable pyramid, and the morphological pyramid are derived. Wavelet transformation, another important multi-scale decomposition tool, decomposes an image into a low-pass layer and detail layers in different directions, which can reduce noise in the fused image. On this basis, researchers have proposed improved analysis tools with better decomposition performance, such as the discrete wavelet transform, the contourlet transform, and the shearlet transform. In addition, many edge-preserving filters, such as the bilateral filter and the guided filter, have been proposed and widely used for multi-scale decomposition; these preserve the spatial continuity of the image structure and reduce halos and artifacts. In the fusion step, fusion weights are often determined by maximum-value selection or weighted-average strategies.
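The generic multi-scale-transform pipeline described above (decompose, fuse each scale with max selection or averaging, reconstruct) can be sketched with a plain Laplacian-style pyramid. This is an illustration of the background technique only, not the patent's SAEP-based method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(img, levels=4, sigma=2.0):
    """Split an image into band-pass detail layers plus a low-pass residual."""
    bands, low = [], np.asarray(img, dtype=float)
    for _ in range(levels):
        smoothed = gaussian_filter(low, sigma)
        bands.append(low - smoothed)      # detail at the current scale
        low = smoothed
    return bands, low

def pyramid_fuse(img_a, img_b, levels=4):
    """Per-layer fusion: max-absolute selection for details, mean for the base."""
    ba, la = laplacian_pyramid(img_a, levels)
    bb, lb = laplacian_pyramid(img_b, levels)
    detail = sum(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(ba, bb))
    return detail + 0.5 * (la + lb)
```

Because the band-pass layers telescope back to the source, fusing an image with itself returns the image unchanged, a useful sanity check for any such pipeline.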
Toet et al. use a contrast pyramid transform to decompose the source images and then select the maximum contrast value as the fusion coefficient. Adu et al. propose using a weighted-average strategy to calculate the weight coefficients of the decomposed images and then fusing same-scale images with those coefficients.
Existing fusion methods focus more on preserving salient information or avoiding artifacts in infrared and visible images, and pay less attention to perceptual issues and the characteristics of the Human Visual System (HVS). Considering HVS mechanisms not only produces fusion results we generally consider visually pleasing; more importantly, it can help address potential drawbacks of current fusion frameworks. Generally, the image fusion process involves using visual features from different source images, in particular comparing them to determine fusion weights or to obtain an appropriate information fusion strategy. However, visual features are susceptible to external physical conditions (e.g., ambient lighting, the characteristics of different sensors), which means these features are not on an equal and well-defined footing when compared and fused; this can degrade fusion quality. This is especially true for infrared and visible light image fusion: the response characteristics of the two sensors differ greatly, and visual information in the visible spectrum can be severely affected by changes in external lighting conditions.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the method transforms the physical intensities of the source images into the visual response space of the HVS (Human Visual System), so that input information from different source images can be compared and fused in a unified human visual response space in which all features share the same perceptual footing. This eliminates external physical factors that could influence the fusion process, finally generating a fused image that is rich in information and has good visual perception, addressing the current potential defects and improving the fusion effect.
The technical scheme of the invention is as follows:
an infrared and visible light image perception fusion method based on multi-scale structural decomposition comprises the following steps:
The conversion into the visual response space comprises the following steps:
(1) According to the contrast sensitivity and local adaptation mechanisms of the HVS, calculate the multi-scale adaptive contrasts of the infrared and visible light images, C_j^r and C_j^v,
where I_j^r is the j-th layer low-pass image of the infrared image, B_j^r is the j-th layer band-pass image of the infrared image, I_j^v is the j-th layer low-pass image of the visible light image, and B_j^v is the j-th layer band-pass image of the visible light image; t is an adaptive parameter with a set value (preferably t = 1), and α is an adjustment parameter (preferably α = 0.8).
(2) According to the nonlinear conversion mechanism of the HVS, calculate the initial values of the multi-scale perceived contrasts of the infrared and visible light images,
where h is a threshold (preferably h = 0.5); c is a constant, set to 21.3; the value of p differs across scale layers, and with N set to 4, as j increases the values of p are 1.40, 1.15, 1.04, and 1.15 respectively.
(3) Apply noise and intensity-saturation suppression to the initial multi-scale perceived contrasts obtained in step (2) to obtain the final multi-scale perceived contrast of the infrared image and of the visible light image.
The noise suppression is as follows:
where th is a threshold distinguishing noise from useful information and Ī is the average gray value of the source image; since noise usually occurs in the small-scale layers, the suppression is applied there.
The intensity-saturation suppression is as follows:
unlike noise, overexposure often occurs in the large-scale layers; r is an overexposure suppression parameter and I_0 denotes the source image.
Step 3: according to the visual characteristics of the human eye, adaptively adjust the lowest-layer low-pass images of the infrared and visible light images, determine the fusion weight based on a saliency strategy, and then obtain the fused lowest-layer low-pass image of the infrared and visible light images.
The specific method comprises the following steps:
(1) Adaptively adjust the lowest-layer low-pass images of the infrared and visible light images to obtain the adjusted lowest-layer low-pass images A_N^r and A_N^v,
where l is a threshold reflecting the average background brightness of the adjusted lowest-layer low-pass image, set to 128.
(2) Determine the fusion weight w of the lowest-layer low-pass image of the visible light image according to the saliency fusion strategy:
where G(·) denotes a Gaussian filtering operation, and the saliency maps of the lowest-layer low-pass images of the visible and infrared images are calculated as follows:
where A_N^r(n) and A_N^r(k) respectively denote the gray values of pixel n and a neighboring pixel k in the region Ω of the lowest-layer infrared image, and A_N^v(n) and A_N^v(k) respectively denote the gray values of pixel n and a neighboring pixel k in the region Ω of the lowest-layer visible light image.
the method comprises the following specific steps:
(1) For pixel-level saliency, aggregate the perceived contrasts from small scale to large scale to obtain the pixel-level saliency of the j-th layer perceived contrast of the infrared image, D_j^r, and of the visible light image, D_j^v.
(2) For structural saliency, aggregate the perceived contrasts from large scale to small scale, also taking into account the adjusted lowest-layer low-pass images A_N^r and A_N^v, thereby obtaining the structural saliency of the j-th layer perceived contrast of the infrared image, G_j^r, and of the visible light image, G_j^v.
where sf is the structural saliency function, calculated as follows:
γ is a balance parameter, γ = 0.1; s_1 and s_2 relate to the eigenvalues of the gradient covariance matrix C, obtained as follows:
where I_x(X) and I_y(X) respectively denote the gradients of pixel X within the local window w_i in the x and y directions.
(3) Compute the overall saliency of the j-th layer perceived contrast of the infrared image, S_j^r, and of the visible light image, S_j^v:
where β is a balance parameter, β = 5, and M_j^r and M_j^v are the saliency adjustment maps of the j-th layers of the infrared and visible light images, with values as follows:
where I_0^r is the source infrared image and Ī^r denotes the average gray value of the infrared image in the neighborhood Ω; sg denotes a sigmoid function whose shape varies with the control parameter u; here u is set to 5.
(4) From the overall saliencies of each layer's perceived contrast of the visible and infrared images, the fusion weight of each layer's perceived contrast of the visible light image is determined as follows:
where u takes different values in different scale layers, with u = 0.1 × 2^(4−j).
Further, the fused image of the j-th layer perceived contrast of the infrared and visible light images is composed as follows:
Step 5: obtain the final fused image through the inverse transformation and reconstruction processes:
where the first component is the lowest-layer low-pass fused image, and the remaining components result from the inverse transformation process in step 2:
Advantageous effects
1. The invention provides a novel sensing framework based on multi-scale structural decomposition, which is used for fusing infrared and visible light images. The proposed framework fully considers relevant characteristics of the HVS and can help solve potential drawbacks of current fusion studies in visual information perception.
2. The invention constructs a multi-scale structure decomposition method based on an SAEP filter to design a perception fusion framework. Compared with other algorithms, the method has excellent edge retention and scale perception characteristics, can obtain image structures of different scales, wherein edge information is kept in each layer, and small-scale details can be regarded as structures with fine spatial scales.
3. The invention provides a novel bidirectional significance aggregation algorithm for determining the fusion weight of multi-scale perception contrast, and the algorithm fully considers pixel-level significance information and large-scale structure information, so that a fusion image with rich information and good visual perception effect can be obtained.
4. The framework proposed by the invention combines key characteristics of the HVS, including multi-scale processing channels, contrast sensitivity, local adaptation, and supra-threshold characteristics. All relevant key features of the HVS are systematically integrated into the proposed framework to simulate the human visual response in complex scenes, creating a visual response space in the HVS represented by the multi-scale perceived contrast.
5. The present invention constructs a multi-scale structural decomposition by utilizing a scale-aware edge preservation (SAEP) filter that has good scale separation and edge preservation characteristics. By decomposition, an image structure of different scales is obtained, with edges remaining in each layer, and the details therein can be regarded as an image structure with a fine spatial scale.
6. In the fusion process, the invention proposes a two-way saliency aggregation strategy to fuse the perceptual contrast of each scale, one direction is aggregated from top to bottom in a scale space to obtain pixel-level saliency, and the other direction is aggregated reversely to calculate structural saliency. The two types of saliency are then combined and fusion weights are calculated according to the sigmoid function.
Drawings
FIG. 1 is a sigmoid function with u taking different values;
FIG. 2 is a flow chart of the fusion framework of the present invention;
FIG. 3 is a comparison of fused infrared and visible light images obtained by different methods: (a) the infrared image, (b) the visible light image, (c) the fused image obtained by the WLS method, (d) the fused image obtained by the U2Fusion method, (e) the fused image obtained by the IFCNN method, and (f) the fused image obtained by the method of the invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
The invention provides an infrared and visible light image perception fusion framework based on multi-scale structural decomposition which, drawing on relevant mechanisms of the Human Visual System (HVS), converts the source images into the visual response space of the HVS for comparison and fusion.
Based on this, the specific embodiments of the present invention are:
Assume the input infrared and visible light images are I^r and I^v respectively. As shown in FIG. 2, the fusion steps are as follows:
Step 1: according to the multi-scale processing-channel characteristics of the HVS, the invention constructs a multi-scale structural decomposition based on a scale-aware edge-preserving (SAEP) filtering algorithm to obtain the infrared and visible light multi-scale filtered images:
I_j = SAEP(I_{j-1}, λ_j, r_{j,0}, r_{j,1}), j = 1, 2, …, N
where I_0 is the source image; λ is the global smoothing weight, λ_1 = 0.1, λ_{j+1} = λ_j + 0.9; r is a scale parameter, and image structure at scales within [r_{j,0}, r_{j,1}] will be smoothed. In this example, r_{1,0} = 0, r_{1,1} = 4, r_{j+1,1} = 2·r_{j,1}, r_{j+1,0} = r_{j,1}, and the number of filtering levels N = 4.
B_j = I_{j-1} − I_j, j = 1, 2, …, N
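The SAEP filter itself is defined by the patent and is not reproduced here; as an illustrative stand-in (an assumption, not the actual SAEP algorithm), the step-1 loop can be sketched with a Gaussian low-pass whose scale doubles per level, mirroring I_j = filter(I_{j-1}) and B_j = I_{j-1} − I_j:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_decompose(img, n_levels=4):
    """Step-1 shaped decomposition: I_j = filter(I_{j-1}), B_j = I_{j-1} - I_j.
    A Gaussian whose sigma doubles per level stands in for SAEP (assumption);
    the doubling mimics the scale-range growth r_{j+1,1} = 2 * r_{j,1}."""
    low = np.asarray(img, dtype=float)        # I_0 is the source image
    bands, sigma = [], 1.0
    for _ in range(n_levels):
        nxt = gaussian_filter(low, sigma)     # I_j
        bands.append(low - nxt)               # B_j = I_{j-1} - I_j
        low, sigma = nxt, sigma * 2.0
    return bands, low                         # (B_1..B_N, I_N)
```

By construction the decomposition is perfectly invertible, I_0 = I_N + Σ B_j, which is exactly the form of reconstruction used in step 5.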
Step 2: based on correlation mechanisms of the HVS such as contrast sensitivity, local adaptation, and supra-threshold characteristics, convert the band-pass and low-pass images obtained by the multi-scale decomposition into the visual response space of the HVS to obtain the multi-scale perceived contrast of the infrared image and of the visible light image.
(1) According to the contrast sensitivity and local adaptation mechanisms of the HVS, calculate the multi-scale adaptive contrasts of the infrared and visible light images,
where I_j and B_j are respectively the j-th layer low-pass and band-pass images of the infrared or visible light image; t is an adaptive parameter taking different values according to the characteristics of the infrared and visible light images; α is an adjustment parameter, preferably α = 0.8.
(2) The adaptive contrast obtained in (1) adapts to human vision to some extent, but it is still not the perceived contrast in the visual response space. The HVS exhibits a nonlinear transfer function that helps obtain perceived contrast in this unified space. According to the nonlinear conversion mechanism of the HVS, obtain the multi-scale perceived contrasts of the infrared and visible light images,
where R_j is taken as positive: if the input is negative, its absolute value is used in the calculation and the sign of the output is inverted; h is a threshold, preferably h = 0.5; c is a constant, set to 21.3; p differs across scale layers, and from small to large scale its values are 1.40, 1.15, 1.04, 1.15, 1.35, and 1.93 respectively.
(3) Further apply noise and intensity-saturation suppression to the perceived contrasts to obtain the final multi-scale perceived contrast of the infrared image and of the visible light image. In general, smaller-scale layers contain more noise, while larger-scale layers suffer more overexposure; therefore, different suppression methods are applied in different frequency layers.
The noise suppression is as follows:
where th is a threshold distinguishing noise from useful information and Ī is the average gray value of the source image.
The intensity-saturation suppression is as follows:
where r is an overexposure suppression parameter and I_0 denotes the source image.
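The suppression formulas appear only as display equations in the original. A minimal sketch, assuming a hard threshold for small-scale noise and a quantile-based attenuation for large-scale overexposure (both functional forms are assumptions, not the patent's exact equations):

```python
import numpy as np

def suppress_noise(R, th):
    """Hard-threshold small-scale perceived contrast: responses with
    magnitude below th are treated as noise and zeroed (assumed form)."""
    R = np.asarray(R, dtype=float)
    return np.where(np.abs(R) < th, 0.0, R)

def suppress_saturation(R, I0, r=0.95):
    """Attenuate large-scale responses where the source image I0 is
    overexposed, i.e. above its r-quantile (assumed form)."""
    R, I0 = np.asarray(R, dtype=float), np.asarray(I0, dtype=float)
    limit = np.quantile(I0, r)
    scale = np.where(I0 > limit, limit / np.maximum(I0, 1e-8), 1.0)
    return R * scale
```

In line with the text, `suppress_noise` would be applied only to the small-scale layers and `suppress_saturation` only to the large-scale ones.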
Step 3: the lowest low-pass layers of the infrared and visible light images reflect the background information of the scene. According to the visual characteristics of the human eye, adaptively adjust the lowest-layer low-pass images of the infrared and visible light images and determine the fusion weight based on a saliency strategy:
(1) Adaptively adjust the lowest-layer low-pass images of the infrared and visible light images to obtain the adjusted lowest-layer low-pass images A_N^r and A_N^v,
where l is a threshold reflecting the average background brightness of the adjusted lowest-layer low-pass image, set to 128; α is an adjustment parameter, preferably α = 0.8.
(2) Calculate the fusion weight w of the lowest-layer low-pass image of the visible light image according to the saliency fusion strategy:
where G(·) denotes a Gaussian filtering operation, and the saliency maps of the lowest-layer low-pass images of the visible and infrared images are calculated as follows:
where A_N(n) and A_N(k) respectively denote the gray values of pixel n and a neighboring pixel k in the lowest-layer image region Ω.
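A sketch of this base-layer weighting: the local saliency of a pixel is the sum of absolute gray-level differences to its neighbors in Ω, the two saliency maps are Gaussian-smoothed, and the visible-light weight is obtained by normalizing them (the normalization form is an assumption, since the weight formula is shown only as an image in the source):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_saliency(A, radius=1):
    """S(n) = sum over neighbors k in the (2*radius+1)^2 window of
    |A(n) - A(k)|, implemented with edge-padded shifts."""
    A = np.asarray(A, dtype=float)
    P = np.pad(A, radius, mode="edge")
    S = np.zeros_like(A)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = P[radius + dy : radius + dy + A.shape[0],
                        radius + dx : radius + dx + A.shape[1]]
            S += np.abs(A - shifted)
    return S

def base_weight(A_vis, A_ir, sigma=2.0, eps=1e-8):
    """Fusion weight w for the visible low-pass layer: Gaussian-smoothed
    saliencies, normalized so w lies in [0, 1] (normalization assumed)."""
    Sv = gaussian_filter(local_saliency(A_vis), sigma)
    Sr = gaussian_filter(local_saliency(A_ir), sigma)
    return Sv / (Sv + Sr + eps)
```

A strongly textured visible base layer against a flat infrared one then receives a weight near 1, as expected of a saliency-driven rule.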
Step 4: in the visual response space of the HVS, the perceived contrast typically contains fine pixel-level saliency information and structural information of the image. For the multi-scale perceived contrasts of the infrared and visible light images, the invention therefore proposes a bidirectional saliency aggregation strategy to fully aggregate these features and determine fusion weights accordingly. One direction combines pixel-level saliency from top to bottom; the other aggregates structural saliency in the opposite direction.
(1) For pixel-level saliency, the method accumulates the perceived contrast from small scale to large scale to obtain the pixel-level saliency D_j of the j-th layer perceived contrast C_j:
The j-th layer pixel-level saliency contains the fine-grained information of the current layer and the smaller-scale layers, so more complete detail can be retained, making the final fused image finer and smoother.
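A minimal sketch of the small-to-large direction, assuming the aggregation is a plain cumulative sum of absolute perceived contrasts (an assumed form; the exact rule is given by a display formula in the source):

```python
import numpy as np

def pixel_level_saliency(contrasts):
    """D_j = |C_1| + ... + |C_j|: aggregate perceived contrast from the
    smallest scale (j = 1) upward, so each layer retains finer-scale detail."""
    D, acc = [], None
    for C in contrasts:
        acc = np.abs(C) if acc is None else acc + np.abs(C)
        D.append(acc.copy())
    return D
```

The structural-saliency direction runs the same accumulation from the largest scale downward, additionally folding in the adjusted lowest-layer low-pass image A_N as described below.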
(2) For structural saliency, the invention aggregates the perceived contrast from large scale to small scale. In addition, since the adjusted lowest-layer low-pass image A_N contains the basic structural information of the source image, it is taken into account to obtain relatively complete structural saliency. The structural saliency G_j of the j-th layer perceived contrast C_j is:
where sf is the structural saliency function, which reflects structural information such as corners in the image and is calculated as follows:
γ is a balance parameter; in the invention, γ = 0.1. s_1 and s_2 relate to the eigenvalues of the gradient covariance matrix C, obtained by the following equation:
where I_x(X) and I_y(X) respectively denote the gradients of pixel X within the local window w_i in the x and y directions.
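The eigenvalue computation for the gradient covariance matrix can be sketched directly; the combination of s_1 and s_2 into sf shown below is one plausible corner-sensitive form and is an assumption, since the patent's sf formula appears only as an image:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def structure_eigenvalues(img, window=5):
    """Per pixel: eigenvalues s1 >= s2 of the gradient covariance matrix
    C = [[sum Ix^2, sum Ix*Iy], [sum Ix*Iy, sum Iy^2]], with the sums
    replaced by local window averages (proportional to the sums)."""
    Iy, Ix = np.gradient(np.asarray(img, dtype=float))
    Jxx = uniform_filter(Ix * Ix, window)
    Jxy = uniform_filter(Ix * Iy, window)
    Jyy = uniform_filter(Iy * Iy, window)
    tr, det = Jxx + Jyy, Jxx * Jyy - Jxy * Jxy
    disc = np.sqrt(np.maximum(tr * tr / 4 - det, 0.0))
    return tr / 2 + disc, tr / 2 - disc   # s1, s2

def structural_saliency(img, gamma=0.1, window=5):
    """One plausible corner-sensitive sf built from s1, s2 (assumed form)."""
    s1, s2 = structure_eigenvalues(img, window)
    return (s1 * s2) / (s1 + s2 + gamma)
```

Flat regions (both eigenvalues near zero) score near zero, straight edges (one large eigenvalue) score low, and corners (two large eigenvalues) score highest, matching the stated role of sf.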
(3) Calculate the overall saliency S_j of the j-th layer perceived contrast C_j:
S_j = M_j * (D_j + β * G_j)
where β is a balance parameter, β = 5; * denotes element-wise multiplication; M_j is the saliency adjustment map, which helps the fused image capture more highlighted target information and less noise from the infrared image. For the infrared and visible light images, its values are as follows:
where I_0^r is the source infrared image and Ī^r denotes the average gray value of the infrared image in the neighborhood Ω; sg denotes the sigmoid function with control parameter u; the sigmoid takes different shapes for different values of u, as shown in FIG. 1, and here u is set to 5.
(4) From the overall saliencies of each layer's perceived contrast of the visible and infrared images, the fusion weight of each layer's perceived contrast of the visible light image is determined as follows:
where u takes different values in different scale layers; in this example, u = 0.1 × 2^(4−j).
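The sigmoid family sg and the per-layer weighting can be sketched as follows; mapping the weight through the sigmoid of the overall-saliency difference is an assumed form (the weight formula is shown only as an image), while u = 0.1 · 2^(4−j) follows the text:

```python
import numpy as np

def sg(x, u):
    """Sigmoid family from the description: larger u gives a steeper curve."""
    return 1.0 / (1.0 + np.exp(-u * x))

def layer_weight(S_vis, S_ir, j, N=4):
    """Assumed form: visible-light weight for layer j as a sigmoid of the
    overall-saliency difference, with u = 0.1 * 2**(N - j) per the text."""
    u = 0.1 * 2 ** (N - j)
    return sg(S_vis - S_ir, u)

def fuse_layer(C_vis, C_ir, w):
    """Weighted combination of the two perceived-contrast layers."""
    return w * C_vis + (1.0 - w) * C_ir
```

Equal saliencies yield w = 0.5, and the small-scale layers (small j, large u) switch between sources more sharply than the large-scale ones.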
Further, the fusion process of each layer of the infrared and visible light perceived contrasts can be described as
where the result is the fused image of the j-th layer perceived contrast.
Step 5: based on the lowest-layer low-pass fused image obtained in step 3 and the fused perceived-contrast images of each scale obtained in step 4 in the visual response space of the HVS, obtain the final fused image through the inverse transformation and reconstruction processes:
where the per-layer components result from the inverse transformation process in step 2:
FIG. 3 compares the fused image of the invention with those of other methods: (a) the infrared image, (b) the visible light image, and (c) to (f) the fusion results of the WLS method, the U2Fusion method, the IFCNN method, and the method of the invention, respectively. It can be seen that the proposed framework achieves better fusion results, because fusion is performed in a consistent and well-defined visual response space that fully accounts for the relevant properties of the human visual system.
Claims (10)
1. The infrared and visible light image perception fusion method based on multi-scale structural decomposition is characterized by comprising the following steps:
step 1: performing multi-scale structural decomposition on infrared and visible light images of the same scene to obtain infrared and visible light multi-scale filtered images, where j = 0, 1, …, N and N is the number of scale layers;
step 2: converting the infrared and visible light multi-scale filtered images obtained in step 1 into the visual response space of the HVS (Human Visual System) to obtain the multi-scale perceived contrast of the infrared image and the multi-scale perceived contrast of the visible light image;
step 3: adaptively adjusting the lowest-layer low-pass images of the infrared and visible light images, determining the fusion weight based on a saliency strategy, and obtaining the fused lowest-layer low-pass image of the infrared and visible light images;
step 4: determining, from the multi-scale perceived contrast of the infrared image and the multi-scale perceived contrast of the visible light image, the fused image of the j-th layer perceived contrast of the infrared and visible light images;
step 5: obtaining the final fused image from the fused lowest-layer low-pass image obtained in step 3 and the fused images of each layer's perceived contrast obtained in step 4, through the inverse transformation and reconstruction processes.
2. The infrared and visible image perception fusion method based on multi-scale structural decomposition according to claim 1, characterized in that:
in the step 1, multi-scale structural decomposition is performed on the infrared and visible light images of the same scene based on a scale-aware edge preservation (SAEP) filtering algorithm.
3. The infrared and visible image perception fusion method based on multi-scale structural decomposition according to claim 1, characterized in that:
in step 2, the specific method of converting the infrared and visible light multi-scale filtered images into the visual response space of the HVS (Human Visual System) to obtain the multi-scale perceptual contrast of the infrared image and the multi-scale perceptual contrast of the visible light image is as follows:
wherein the j-th layer low-pass and band-pass images of the infrared image and the j-th layer low-pass and band-pass images of the visible light image are used; t is an adaptive parameter with a set value, preferably t = 1; α is an adjustment parameter, preferably α = 0.8;
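The contrast transform itself appears only as a formula image in the original, so its exact form is unavailable here. A Weber-like contrast of the same general shape, with the band-pass response normalized by the local luminance (low-pass) term, can be sketched purely for illustration; the functional form, as well as the roles of t and α, are assumptions:

```python
import numpy as np

def perceptual_contrast(band, low, t=1.0, alpha=0.8):
    # Hypothetical Weber-like contrast: band-pass response divided by a
    # luminance term; t prevents blow-up in dark regions, alpha shapes the
    # nonlinearity. Not the patented transform, which is given only as an image.
    return band / (np.abs(low)**alpha + t)
```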
(2) calculating the initial values of the multi-scale perceptual contrast of the infrared and visible light images;
wherein h is a threshold value and c is a constant;
(3) performing noise and intensity-saturation suppression on the initial multi-scale perceptual contrast values obtained in (2) to obtain the final multi-scale perceptual contrast of the infrared image and the final multi-scale perceptual contrast of the visible light image.
The noise suppression method is as follows:
The intensity saturation is suppressed as follows:
wherein r is an overexposure suppression parameter and I0 represents the source image.
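The suppression formulas are likewise given only as images in the original. One plausible shape, shown strictly for illustration, attenuates sub-threshold responses (noise) and compresses large responses (overexposure); the thresholding rule and the power-law form are both assumptions, as is the function name:

```python
import numpy as np

def suppress(contrast, h=0.5, r=0.8):
    # Hypothetical: scale down responses below threshold h (noise suppression),
    # then apply a power law with exponent r < 1 so large responses saturate
    # more slowly (overexposure suppression). Sign of the contrast is preserved.
    a = np.abs(contrast)
    denoised = np.where(a < h, contrast * (a / h), contrast)
    return np.sign(denoised) * np.abs(denoised) ** r
```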
4. The method of claim 3, wherein the method comprises:
in step 3, the method of adaptively adjusting the lowest-layer low-pass images of the infrared and visible light images and determining fusion weights based on the saliency strategy to obtain the infrared and visible light bottom-layer low-pass fused image comprises:
(1) adaptively adjusting the bottom-layer low-pass images of the infrared and visible light images to obtain the adjusted bottom-most-layer low-pass images;
wherein l is a threshold;
(2) determining the fusion weight w of the lowest-layer low-pass image of the visible light image according to the saliency fusion strategy:
wherein a Gaussian filtering operation is applied, and the saliency maps of the lowest-layer low-pass images of the visible light and infrared images are calculated as follows:
wherein the gray values of a pixel n and a neighboring pixel k are taken within the bottom-layer image region Ω of the infrared image and within the bottom-layer image region Ω of the visible light image, respectively;
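Claim 4 builds the saliency map of a bottom-layer image from gray-level differences between each pixel n and its neighbors k inside a region Ω. A direct sketch under the assumption that the differences are summed in absolute value (the exact aggregation is given only as a formula image):

```python
import numpy as np

def neighborhood_saliency(img, radius=1):
    # Saliency of pixel n: sum of |gray(n) - gray(k)| over all neighbors k
    # in the (2*radius+1)^2 region Omega, computed with shifted views.
    pad = np.pad(img, radius, mode='edge')
    sal = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            sal += np.abs(img - shifted)
    return sal
```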
5. The method of claim 4, wherein the method comprises:
in step 4, the method of obtaining the fused image of the j-th layer perceptual contrast of the infrared and visible light images from the multi-scale perceptual contrasts of the infrared and visible light images using a bidirectional saliency aggregation strategy comprises:
(1) aggregating the perceptual contrasts of the infrared and visible light images from small scale to large scale to obtain the pixel-level saliency of the j-th layer perceptual contrast of each image;
(2) aggregating the perceptual contrasts of the infrared and visible light images from large scale to small scale to obtain the structural saliency of the j-th layer perceptual contrast of each image;
where sf is the structural saliency function, calculated as follows:
γ is a balance parameter; s1 and s2 are obtained by the following formula:
wherein Ix(X) and Iy(X) respectively represent the gradients of a pixel X within the local window Wi in the x and y directions;
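Deriving s1 and s2 from windowed x- and y-gradients matches the eigenvalue form of the 2×2 gradient structure tensor, so that reading is assumed in the sketch below; the way sf combines s1, s2, and γ is hypothetical, since the formula image is omitted:

```python
import numpy as np

def structural_saliency(img, i0, j0, r=2, gamma=0.1):
    # Eigenvalues s1 >= s2 of the gradient structure tensor over the window
    # W_i centered at (i0, j0); sf scores anisotropic (edge-like) structure.
    gy, gx = np.gradient(img.astype(float))
    wy = gy[i0 - r:i0 + r + 1, j0 - r:j0 + r + 1]
    wx = gx[i0 - r:i0 + r + 1, j0 - r:j0 + r + 1]
    J = np.array([[(wx * wx).sum(), (wx * wy).sum()],
                  [(wx * wy).sum(), (wy * wy).sum()]])
    s2_, s1_ = np.sort(np.linalg.eigvalsh(J))      # ascending eigenvalues
    return (s1_ - s2_) / (s1_ + s2_ + gamma)       # hypothetical sf combination
```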
(3) calculating the overall saliency of the j-th layer perceptual contrast of the infrared image and the overall saliency of the j-th layer perceptual contrast of the visible light image:
wherein β is a balance parameter; the saliency map of the j-th layer of the infrared image and the saliency adjustment map of the j-th layer of the visible light image take the following values:
wherein the source infrared image and the average gray value of the infrared image in the neighborhood Ω are used; sg denotes a sigmoid function, and u is a control parameter;
(4) obtaining, from the overall saliencies of the per-layer perceptual contrasts of the visible light and infrared images, the fusion weight of each layer of perceptual contrast of the visible light image as follows:
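The bidirectional aggregation and weighting of claim 5 can be sketched as follows. Because the aggregation and weight formulas are given only as images, everything here is an assumption: fine-to-coarse aggregation is modeled as a running product over the finer layers, coarse-to-fine as a product over the coarser layers, and the two are combined with β before normalizing infrared against visible.

```python
import numpy as np

def bidirectional_weights(sal_layers_ir, sal_layers_vis, beta=5.0):
    # Hypothetical sketch: pixel-level saliency aggregates small scale -> large
    # scale, structural saliency large scale -> small scale; beta balances them,
    # and the visible weight is the normalized overall saliency.
    n = len(sal_layers_vis)
    weights = []
    for j in range(n):
        pix_v = np.prod(sal_layers_vis[:j + 1], axis=0)   # small -> large
        str_v = np.prod(sal_layers_vis[j:], axis=0)       # large -> small
        pix_i = np.prod(sal_layers_ir[:j + 1], axis=0)
        str_i = np.prod(sal_layers_ir[j:], axis=0)
        ov = pix_v + beta * str_v                         # overall saliency, visible
        oi = pix_i + beta * str_i                         # overall saliency, infrared
        weights.append(ov / (ov + oi + 1e-12))            # fusion weight of visible layer
    return weights
```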
6. The method of claim 5, wherein the method comprises:
in step 5, the method of obtaining the final fused image through the inverse transformation and reconstruction processes is as follows:
wherein the bottom-most low-pass fused image is used, and the remaining terms result from the inverse transformation process of step 2:
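For an additive multi-scale decomposition, the reconstruction of claim 6 amounts to summing the (inverse-transformed) band-pass layers back onto the bottom low-pass image; the telescoping sum makes the round trip exact. A self-contained sketch, with a mean filter standing in for the SAEP low-pass and an identity-style inverse of the contrast transform assumed:

```python
import numpy as np

def blur(img):
    # 3x3 mean filter as a stand-in low-pass (the patent uses SAEP filtering).
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def decompose(img, n=3):
    lows = [img]
    for _ in range(n):
        lows.append(blur(lows[-1]))
    return lows[-1], [lows[j] - lows[j + 1] for j in range(n)]

def reconstruct(low, bands):
    # Inverse transform of the additive decomposition: the band-pass layers
    # telescope back onto the bottom-most low-pass image.
    return low + sum(bands)
```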
7. The method of claim 6, wherein the method comprises:
h=0.5。
8. The method of claim 6, wherein the method comprises:
c=21.3。
9. The method of claim 6, wherein the method comprises:
the value of p differs across scale layers; as j increases from small to large with N = 4, the values of p are 1.40, 1.15, 1.04, and 1.15, respectively.
10. The method of claim 6, wherein the method comprises:
l=128,β=5,γ=0.1。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210381391.0A CN114897751A (en) | 2022-04-12 | 2022-04-12 | Infrared and visible light image perception fusion method based on multi-scale structural decomposition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114897751A true CN114897751A (en) | 2022-08-12 |
Family
ID=82718480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210381391.0A Pending CN114897751A (en) | 2022-04-12 | 2022-04-12 | Infrared and visible light image perception fusion method based on multi-scale structural decomposition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114897751A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115797244A (en) * | 2023-02-07 | 2023-03-14 | 中国科学院长春光学精密机械与物理研究所 | Image fusion method based on multi-scale direction co-occurrence filter and intensity transmission |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111583123A (en) | Wavelet transform-based image enhancement algorithm for fusing high-frequency and low-frequency information | |
CN101770639B (en) | Enhancement method of low-illumination image | |
CN104835130A (en) | Multi-exposure image fusion method | |
CN107274365A (en) | A kind of mine image intensification method based on unsharp masking and NSCT algorithms | |
Song et al. | High dynamic range infrared images detail enhancement based on local edge preserving filter | |
CN108921809B (en) | Multispectral and panchromatic image fusion method based on spatial frequency under integral principle | |
CN113222877B (en) | Infrared and visible light image fusion method and application thereof in airborne photoelectric video | |
CN104537678B (en) | A kind of method that cloud and mist is removed in the remote sensing images from single width | |
Li et al. | Underwater image high definition display using the multilayer perceptron and color feature-based SRCNN | |
Student | Study of image fusion-techniques method and applications | |
Gao et al. | Improving the performance of infrared and visible image fusion based on latent low-rank representation nested with rolling guided image filtering | |
Jian et al. | Infrared and visible image fusion based on deep decomposition network and saliency analysis | |
CN113298147B (en) | Image fusion method and device based on regional energy and intuitionistic fuzzy set | |
Karalı et al. | Adaptive image enhancement based on clustering of wavelet coefficients for infrared sea surveillance systems | |
CN107451986B (en) | Single infrared image enhancement method based on fusion technology | |
CN114897751A (en) | Infrared and visible light image perception fusion method based on multi-scale structural decomposition | |
CN110148083B (en) | Image fusion method based on rapid BEMD and deep learning | |
CN104240208A (en) | Uncooled infrared focal plane detector image detail enhancement method | |
CN111815550B (en) | Infrared and visible light image fusion method based on gray level co-occurrence matrix | |
CN116309233A (en) | Infrared and visible light image fusion method based on night vision enhancement | |
CN114066786A (en) | Infrared and visible light image fusion method based on sparsity and filter | |
Thai et al. | Performance evaluation of high dynamic range image tone mapping operators based on separable non-linear multiresolution families | |
CN116597146A (en) | Semantic segmentation method for laser radar sparse point cloud data | |
CN110992287A (en) | Method for clarifying non-uniform illumination video | |
CN110689510A (en) | Sparse representation-based image fusion method introducing dictionary information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||