CN107248150A - A multi-scale image fusion method based on guided-filtering salient region extraction

A multi-scale image fusion method based on guided-filtering salient region extraction

Info

Publication number
CN107248150A
CN107248150A
Authority
CN
China
Prior art keywords
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710638504.XA
Other languages
Chinese (zh)
Inventor
崔光茫
赵巨峰
公晓丽
辛青
逯鑫淼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201710638504.XA (2017-07-31)
Publication of CN107248150A (2017-10-13)
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-scale image fusion method based on guided-filtering salient region extraction, comprising the following steps: input the visible light image and the infrared image of the same scene, and perform multi-scale image decomposition using the non-subsampled contourlet transform, dividing each image into detail layers of several different scales; compute the local standard deviation distribution map of each layer image; from the local standard deviation distribution maps, compute the corresponding binarized saliency weight maps; extract the salient region information with a guided filter; fuse the images layer by layer in combination with the salient region maps; and reconstruct the final fusion result by weighted accumulation. The invention realizes effective multi-scale decomposition and adopts a salient region extraction algorithm based on guided filtering that effectively extracts the salient region information of the corresponding layer, so that the fusion result better preserves the saliency information of the respective image sources and achieves a good visual fusion effect.

Description

Multi-scale image fusion method based on guided filtering salient region extraction
Technical Field
The invention relates to computer image processing, and in particular to a multi-scale infrared and visible light image fusion method based on guided-filtering salient region extraction.
Background
With the development of sensor technology, image sensors of different wave bands are widely applied, and the image fusion technology that developed alongside them has become a research hotspot. Multiband image fusion combines images acquired by different image sensors of the same scene into a single fused image with richer information, and has important applications in military, civil, and other imaging detection fields.
Infrared and visible light image fusion can combine the thermal radiation target information of the infrared image with the scene detail information of the visible light image, preserving the characteristic information of both images in the fusion result. Researchers in China and abroad have proposed many image fusion algorithms, mainly built on multi-scale image decomposition tools such as pyramid decomposition, principal component analysis, and morphological top-hat transformation; these preserve good texture and contrast characteristics in the fusion result and thereby extract the important features of the visible light and infrared images. Judging from current research on fusion algorithms, how to effectively extract the salient feature information of multi-source images and achieve fine fusion of image detail information is a problem that infrared and visible light image fusion algorithms urgently need to solve.
Disclosure of Invention
The invention provides a multi-scale image fusion method based on guided-filtering salient region extraction. It uses the non-subsampled contourlet transform (NSCT) to decompose the input infrared and visible light images into multiple scales, and performs effective image fusion in the different detail scale layers in combination with a guided-filtering-based salient region extraction method, ensuring that the visually salient region information of each layer's multi-scale decomposition is preserved; a fusion result with a good visual enhancement effect is finally obtained through weighted reconstruction.
Built on NSCT multi-scale image decomposition and guided-filtering salient region extraction, the method has two main features:
1. The NSCT multi-scale decomposition tool realizes effective multi-scale decomposition, ensures coarse-to-fine layered processing of the fusion information, and increases the information richness of the fusion result. Weighted accumulation then reconstructs the fusion results of the detail layers at different scales into the final fused image; with reasonable weight settings, a good visual information enhancement effect can be obtained.
2. The salient region extraction algorithm based on guided filtering effectively extracts the salient region information of the corresponding layer. A binarized saliency weight map, obtained by the designed algorithm, serves as the input image of the guided filtering operation; it characterizes the edges with large local standard deviation and the detail-rich regions of the original image, so the filtering result reflects the saliency characteristics of human vision. The original image serves as the guide image of the guided filtering operation, yielding edge saliency information continuously distributed in [0,1]; combined with different blurring parameter settings, saliency maps of different scales from coarse to fine are obtained, so the salient region information of each layer's multi-scale decomposition is better preserved.
A multi-scale image fusion method based on guide filtering salient region extraction comprises the following steps:
(1) Multi-scale image decomposition using the non-subsampled contourlet transform.
A visible light image f and an infrared image g of the same imaging scene are input and each is decomposed into multiple scales; decomposition layers of different detail scales are obtained with the non-subsampled contourlet transform (NSCT). The process is expressed as:
f_i = Multi_NSCT(f, i)    (1)
g_i = Multi_NSCT(g, i)    (2)
where i = 1, 2, ..., N and N denotes the number of NSCT decomposition layers; Multi_NSCT denotes the NSCT-based multi-scale image decomposition framework; f_i and g_i denote the visible light and infrared detail layers of the corresponding scale.
NSCT has good multi-scale and time-frequency local characteristics, as well as anisotropic multi-directional characteristics. With NSCT multi-scale decomposition, the decomposed layers better preserve image edge information at different scales, which improves the final result. Moreover, the decomposition involves no down-sampling, so every decomposition layer keeps the original image resolution and the reconstruction incurs no information loss from up-sampling.
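For illustration, a minimal Python sketch of such a full-resolution multi-scale decomposition follows. NSCT has no standard Python implementation, so a non-subsampled difference-of-Gaussians stack stands in for Multi_NSCT here; the substitution, the function name, and the sigma schedule are assumptions, not the patent's transform:

```python
import cv2
import numpy as np

def multi_scale_decompose(img, n_levels=4):
    """Full-resolution detail layers plus a residual base layer.

    Stand-in for Multi_NSCT: a difference-of-Gaussians stack is used
    because NSCT has no standard Python implementation. Like NSCT,
    there is no down-sampling, so every layer keeps the input
    resolution, and the layers sum back to the original image.
    """
    x = img.astype(np.float64)
    layers = []
    for i in range(n_levels):
        blurred = cv2.GaussianBlur(x, (0, 0), sigmaX=2.0 ** (i + 1))
        layers.append(x - blurred)   # detail layer i (fine to coarse)
        x = blurred
    layers.append(x)                 # low-frequency base layer
    return layers
```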
(2) Local standard deviation distribution map calculation.
For each decomposition-layer image obtained in step (1), the local standard deviation distribution map is computed by traversing the image with a local window:
S_{f_i} = LocalStd(f_i, W)    (3)
S_{g_i} = LocalStd(g_i, W)    (4)
where W is a local window of size T × T, LocalStd is the standard deviation computed over the local window, and S_{f_i} and S_{g_i} are the local standard deviation distribution maps of the corresponding visible light and infrared layers. Image regions with larger local standard deviation take larger pixel values in the distribution map.
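A minimal sketch of this local-window traversal (eqs. (3)-(4)), computed with box filters via the identity Var[x] = E[x^2] - E[x]^2; the helper name and the use of scipy are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(layer, T=11):
    """S = LocalStd(layer, W): standard deviation over a sliding T x T
    window W, via Var[x] = E[x^2] - E[x]^2 with box-filtered local
    means; this visits every window in O(1) per pixel, no loops."""
    x = layer.astype(np.float64)
    m = uniform_filter(x, size=T)        # local mean E[x]
    m2 = uniform_filter(x * x, size=T)   # local mean E[x^2]
    var = np.maximum(m2 - m * m, 0.0)    # clamp tiny FP negatives
    return np.sqrt(var)
```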
(3) Binarized saliency weight map acquisition.
For each layer, the visible light and infrared local standard deviation distribution maps obtained in step (2) are compared pixel by pixel and, in combination with a morphological closing operation, a binarized saliency weight map is obtained:
P^k_{f_i} = 1 if S^k_{f_i} = max(S^k_{f_i}, S^k_{g_i}), else 0,  for k ∈ S_{f_i}    (5)
P^k_{g_i} = 1 if S^k_{g_i} = max(S^k_{f_i}, S^k_{g_i}), else 0,  for k ∈ S_{g_i}    (6)
where S^k_{f_i} and S^k_{g_i} are the pixel values of the visible light and infrared maps of the corresponding layer at pixel k. This binarization criterion ensures that the saliency weight map reflects the saliency information of each band well. A closing operation from image morphology is then applied to obtain the final binarized saliency weight map:
P_{f_i} = imclose(P^k_{f_i})    (7)
P_{g_i} = imclose(P^k_{g_i})    (8)
where imclose() is the closing operation of image morphology, and P_{f_i} and P_{g_i} are the binarized saliency weight maps of the corresponding layers. The closing operation effectively removes fine disconnected areas from the saliency weight regions and yields smoother salient region outlines.
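The binarization and closing of eqs. (5)-(8) map directly onto NumPy comparisons and OpenCV morphology; a sketch, with an assumed structuring element (the patent does not specify its shape or size):

```python
import cv2
import numpy as np

def binary_saliency_weights(S_f, S_g, close_size=5):
    """Eqs. (5)-(8): winner-take-all binarization plus closing.

    A pixel gets weight 1 in the band whose local standard deviation
    is the larger of the two (ties give 1 in both bands, as in
    eqs. (5)-(6)); closing then removes small disconnected specks and
    smooths the region outlines. close_size is an assumed value.
    """
    P_f = (S_f >= S_g).astype(np.uint8)
    P_g = (S_g >= S_f).astype(np.uint8)
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (close_size,) * 2)
    P_f = cv2.morphologyEx(P_f, cv2.MORPH_CLOSE, k)
    P_g = cv2.morphologyEx(P_g, cv2.MORPH_CLOSE, k)
    return P_f, P_g
```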
(4) Salient region map extraction based on guided filtering.
For the binarized saliency weight maps obtained in step (3), the salient region extraction results are obtained by guided filtering:
Map_{f_i} = GF(P_{f_i}, f, r_i, μ_i)    (9)
Map_{g_i} = GF(P_{g_i}, g, r_i, μ_i)    (10)
where GF() denotes the guided filtering operation; P_{f_i} and P_{g_i} are the input images of the guided filtering process; f and g are the guide images; r_i and μ_i are the guided filter size and blurring degree of the corresponding layer; and Map_{f_i} and Map_{g_i} are the salient region extraction results of the corresponding visible light and infrared layers.
In the salient region extraction, the binarized saliency weight map serves as the input image. It characterizes the edges with large local standard deviation and the detail-rich regions of the original image, which are the regions of greatest interest and attention to human vision; passed through guided filtering, the filtering result therefore reflects the saliency characteristics of human vision. With the original image as the guide image, the guided filtering operation relaxes the binarized salient region weight map into edge saliency information continuously distributed in [0,1]; using different guided filtering blurring parameter settings additionally yields saliency maps of different scales from coarse to fine, so the salient region information of each layer's multi-scale decomposition is better preserved.
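A sketch of eqs. (9)-(10) using OpenCV's guided filter (available in opencv-contrib-python); reading the patent's blurring degree μ_i as the filter's eps regularization parameter is an assumption:

```python
import cv2
import numpy as np

def saliency_region_map(P, guide, r, mu):
    """Eq. (9)/(10): Map = GF(P, guide, r, mu).

    The binary weight map is the filter input and the original image
    is the guide, so the hard 0/1 weights relax into values
    continuously distributed in [0, 1] that follow the guide's edges.
    Requires opencv-contrib-python; mu is passed as the guided
    filter's eps (regularization / blurring) term.
    """
    return cv2.ximgproc.guidedFilter(
        guide.astype(np.float32), P.astype(np.float32), r, mu)
```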
(5) Image fusion in combination with salient region extraction
Using the salient region map extraction results obtained in step (4), image fusion is performed in each scale layer, so that the fusion result better preserves the detail information of the salient regions of the different images:
M_i = ( [f_i × Map_{f_i} + g_i × (1 - Map_{f_i})] + [f_i × (1 - Map_{g_i}) + g_i × Map_{g_i}] ) / 2,  i = 1, 2, ..., N    (11)
where M_i denotes the fusion result of layer i.
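Eq. (11) is a direct per-pixel blend; a minimal sketch (the helper name is an assumption):

```python
def fuse_layer(f_i, g_i, map_f, map_g):
    """Eq. (11): average two complementary saliency-weighted blends,
    so each band's own saliency map decides where its details win."""
    blend_f = f_i * map_f + g_i * (1.0 - map_f)
    blend_g = f_i * (1.0 - map_g) + g_i * map_g
    return 0.5 * (blend_f + blend_g)
```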
(6) Fused image weighted reconstruction
The per-layer fusion results obtained in step (5) are combined by weighted accumulation to obtain the final fusion result:
M_{fusion} = Σ_{i=1}^{N} λ_i M_i    (12)
where λ_i is the reconstruction weight of the fusion result of layer i and M_{fusion} is the final visible light and infrared fusion result; by setting reasonable weight values, an information-enhanced fused image is obtained.
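And eq. (12), the weighted accumulation, as a short helper:

```python
def weighted_reconstruct(fused_layers, lambdas):
    """Eq. (12): M_fusion = sum_i lambda_i * M_i."""
    return sum(lam * M for lam, M in zip(lambdas, fused_layers))
```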
The beneficial effects of the invention are as follows. For visible light and infrared image fusion, the invention uses the non-subsampled contourlet transform (NSCT) to decompose the images into multiple scales, performs effective image fusion in the different detail scale layers in combination with a guided-filtering-based salient region extraction method, ensures that the visually salient region information of each layer's multi-scale decomposition is preserved, and finally obtains a fusion result with a good visual enhancement effect through weighted reconstruction. As long as a visible light image and an infrared image of the same scene are input, effective multi-scale image fusion is carried out and a high-quality fusion result is obtained. The invention can be applied to remote sensing detection, military reconnaissance, security monitoring, industrial production, and other fields.
Drawings
FIG. 1 is a flow chart of an algorithm;
FIG. 2(a) is an input infrared image;
FIG. 2(b) is a visible light image;
FIG. 3(a) is the local standard deviation distribution map of the infrared image;
FIG. 3(b) is the local standard deviation distribution map of the visible light image;
FIG. 4(a) is the binarized saliency weight map of the infrared image;
FIG. 4(b) is the binarized saliency weight map of the visible light image;
FIG. 5(a) is the salient region map of the infrared image;
FIG. 5(b) is the salient region map of the visible light image;
fig. 6 shows the infrared-visible image fusion result.
Detailed Description
The technical solution of the present invention is described in detail and fully with reference to the accompanying drawings.
A flow chart of the method of the invention is shown in figure 1.
Fig. 2 is an example of a set of infrared-visible images of the same scene, where fig. 2(a) is an input infrared image and fig. 2(b) is an input visible image.
Figs. 3-5 illustrate the guided-filtering-based salient region map acquisition. FIGS. 3(a) and (b) are the local standard deviation distribution maps of the infrared image and the visible light image, respectively, highlighting the scene regions with large local standard deviation; FIGS. 4(a) and (b) are the binarized saliency weight maps of the infrared and visible light images, respectively, reflecting the human-vision saliency information of each image; FIGS. 5(a) and (b) are the salient region maps of the infrared and visible light images, respectively, reflecting the regions currently of most interest to human vision. Introducing these maps into the image fusion framework helps obtain a fusion result with a better subjective visual effect.
In this embodiment, N is set to 4; that is, the NSCT multi-scale decomposition tool decomposes the input images into 4 scales, yielding 4 detail layers from coarse to fine.
Then, for each decomposition scale, the corresponding original infrared and visible light images are used as guide images to extract the salient region maps. In the computation of the local standard deviation distribution maps, the local window W size is set to T = 11; in the guided-filtering salient region map extraction, the guided filter sizes and blurring degree parameters are set to r_i = {10, 7, 7, 7} and μ_i = {0.1, 0.001, 0.00001, 0.000001} (i = 1, 2, 3, 4). The finer the decomposition layer, the smaller the corresponding guided-filtering blurring degree, so that the detail information of the salient regions is better preserved.
Combined with the saliency map distributions, the infrared and visible light images at each scale are fused, which maintains the edges and details of the images well and highlights the regions salient to human vision.
Finally, the final infrared and visible light fusion result is obtained by weighted reconstruction of the per-scale fusion result maps with reconstruction weights λ_i. The final fusion result is shown in FIG. 6; it preserves the salient region information of the infrared and visible light images well and has an excellent visual effect.
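Tying the embodiment's parameters to the sketches above, a hypothetical end-to-end run might look as follows; the file names and λ_i values are placeholders (the embodiment's weights are not legible in this text), and the base-layer handling belongs to the NSCT stand-in, not the patent:

```python
import cv2

# Co-registered grayscale source images (placeholder file names).
f = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # visible light image f
g = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)  # infrared image g

N = 4                                  # decomposition scales (embodiment)
T = 11                                 # local window size (embodiment)
r = [10, 7, 7, 7]                      # guided filter sizes r_i (embodiment)
mu = [0.1, 0.001, 0.00001, 0.000001]   # blurring degrees mu_i (embodiment)
lam = [1.0, 1.0, 1.0, 1.0, 1.0]        # placeholder lambda_i (+ base layer)

f_layers = multi_scale_decompose(f, N)   # N detail layers + base layer
g_layers = multi_scale_decompose(g, N)

fused = []
for i in range(N):
    S_f, S_g = local_std(f_layers[i], T), local_std(g_layers[i], T)
    P_f, P_g = binary_saliency_weights(S_f, S_g)
    map_f = saliency_region_map(P_f, f, r[i], mu[i])
    map_g = saliency_region_map(P_g, g, r[i], mu[i])
    fused.append(fuse_layer(f_layers[i], g_layers[i], map_f, map_g))

# Base layers averaged: an assumption of the difference-of-Gaussians
# stand-in, whose residual layer the patent text does not discuss.
fused.append(0.5 * (f_layers[N] + g_layers[N]))

M_fusion = weighted_reconstruct(fused, lam)
```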

Claims (1)

1. A multi-scale image fusion method based on guided-filtering salient region extraction, characterized by comprising the following steps:
(1) performing multi-scale image decomposition using the non-subsampled contourlet transform;
a visible light image f and an infrared image g of the same imaging scene are input and each is decomposed into multiple scales; decomposition layers of different detail scales are obtained by NSCT decomposition, expressed as:
f_i = Multi_NSCT(f, i)    (1)
g_i = Multi_NSCT(g, i)    (2)
where i = 1, 2, ..., N and N denotes the number of NSCT decomposition layers; Multi_NSCT denotes the NSCT-based multi-scale image decomposition framework; f_i and g_i denote the visible light and infrared detail layers of the corresponding scale; NSCT is the non-subsampled contourlet decomposition;
(2) computing the local standard deviation distribution map;
for each decomposition-layer image obtained in step (1), the local standard deviation distribution map is computed by traversing the image with a local window:
S_{f_i} = LocalStd(f_i, W)    (3)
S_{g_i} = LocalStd(g_i, W)    (4)
where W is a local window of size T × T, LocalStd is the standard deviation computed over the local window, and S_{f_i} and S_{g_i} are the local standard deviation distribution maps of the corresponding visible light and infrared layers;
(3) acquiring the binarized saliency weight map;
for each layer, the visible light and infrared local standard deviation distribution maps obtained in step (2) are compared pixel by pixel and, in combination with a morphological closing operation, a binarized saliency weight map is obtained:
P^k_{f_i} = 1 if S^k_{f_i} = max(S^k_{f_i}, S^k_{g_i}), else 0,  for k ∈ S_{f_i}    (5)
P^k_{g_i} = 1 if S^k_{g_i} = max(S^k_{f_i}, S^k_{g_i}), else 0,  for k ∈ S_{g_i}    (6)
where S^k_{f_i} and S^k_{g_i} are the pixel values of the visible light and infrared layers at pixel k; a closing operation from image morphology is then applied to obtain the final binarized saliency weight map:
P_{f_i} = imclose(P^k_{f_i})    (7)
P_{g_i} = imclose(P^k_{g_i})    (8)
where imclose() is the closing operation of image morphology, and P_{f_i} and P_{g_i} are the binarized saliency weight maps of the corresponding layers;
(4) extracting the salient region map based on guided filtering;
for the binarized saliency weight maps obtained in step (3), the salient region extraction results are obtained by guided filtering:
Map_{f_i} = GF(P_{f_i}, f, r_i, μ_i)    (9)
Map_{g_i} = GF(P_{g_i}, g, r_i, μ_i)    (10)
where GF() denotes the guided filtering operation; P_{f_i} and P_{g_i} are the input images of the guided filtering process; f and g are the guide images; r_i and μ_i are the guided filter size and blurring degree of the corresponding layer; and Map_{f_i} and Map_{g_i} are the salient region extraction results of the corresponding visible light and infrared layers;
(5) image fusion in combination with salient region extraction
using the salient region map extraction results obtained in step (4), image fusion is performed in each scale layer, so that the fusion result better preserves the detail information of the salient regions of the different images:
M_i = ( [f_i × Map_{f_i} + g_i × (1 - Map_{f_i})] + [f_i × (1 - Map_{g_i}) + g_i × Map_{g_i}] ) / 2,  i = 1, 2, ..., N    (11)
where M_i denotes the fusion result of layer i;
(6) fused image weighted reconstruction
the per-layer fusion results obtained in step (5) are combined by weighted accumulation to obtain the final fusion result:
M_{fusion} = Σ_{i=1}^{N} λ_i M_i    (12)
where λ_i is the reconstruction weight of the fusion result of layer i and M_{fusion} is the final visible light and infrared fusion result; by setting reasonable weight values, an information-enhanced fused image is obtained.
CN201710638504.XA 2017-07-31 2017-07-31 A multi-scale image fusion method based on guided-filtering salient region extraction Pending CN107248150A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710638504.XA CN107248150A (en) 2017-07-31 2017-07-31 A multi-scale image fusion method based on guided-filtering salient region extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710638504.XA CN107248150A (en) 2017-07-31 2017-07-31 A multi-scale image fusion method based on guided-filtering salient region extraction

Publications (1)

Publication Number Publication Date
CN107248150A true CN107248150A (en) 2017-10-13

Family

ID=60013287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710638504.XA Pending CN107248150A (en) A multi-scale image fusion method based on guided-filtering salient region extraction

Country Status (1)

Country Link
CN (1) CN107248150A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977950A (en) * 2017-12-06 2018-05-01 上海交通大学 Rapid and effective video image fusion method based on multi-scale guide filtering
CN108364273A (en) * 2018-01-30 2018-08-03 中南大学 A kind of method of multi-focus image fusion under spatial domain
CN108830818A (en) * 2018-05-07 2018-11-16 西北工业大学 A kind of quick multi-focus image fusing method
CN109344699A (en) * 2018-08-22 2019-02-15 天津科技大学 Winter jujube disease recognition method based on depth of seam division convolutional neural networks
CN109360175A (en) * 2018-10-12 2019-02-19 云南大学 A kind of infrared image interfusion method with visible light
CN109754385A (en) * 2019-01-11 2019-05-14 中南大学 It is not registrated the rapid fusion method of multiple focussing image
CN109816617A (en) * 2018-12-06 2019-05-28 重庆邮电大学 Multimode medical image fusion method based on Steerable filter and graph theory conspicuousness
CN110009551A (en) * 2019-04-09 2019-07-12 浙江大学 A kind of real-time blood vessel Enhancement Method of CPUGPU collaboration processing
CN110189284A (en) * 2019-05-24 2019-08-30 南昌航空大学 A kind of infrared and visible light image fusion method
CN110210541A (en) * 2019-05-23 2019-09-06 浙江大华技术股份有限公司 Image interfusion method and equipment, storage device
CN110211081A (en) * 2019-05-24 2019-09-06 南昌航空大学 A kind of multi-modality medical image fusion method based on image attributes and guiding filtering
CN110349117A (en) * 2019-06-28 2019-10-18 重庆工商大学 A kind of infrared image and visible light image fusion method, device and storage medium
CN110930311A (en) * 2018-09-19 2020-03-27 杭州萤石软件有限公司 Method and device for improving signal-to-noise ratio of infrared image and visible light image fusion
CN111223069A (en) * 2020-01-14 2020-06-02 天津工业大学 Image fusion method and system
CN111681243A (en) * 2020-08-17 2020-09-18 广东利元亨智能装备股份有限公司 Welding image processing method and device and electronic equipment
CN112132753A (en) * 2020-11-06 2020-12-25 湖南大学 Infrared image super-resolution method and system for multi-scale structure guide image
CN112837253A (en) * 2021-02-05 2021-05-25 中国人民解放军火箭军工程大学 Night infrared medium-long wave image fusion method and system
WO2021120406A1 (en) * 2019-12-17 2021-06-24 大连理工大学 Infrared and visible light fusion method based on saliency map enhancement
CN117745555A (en) * 2023-11-23 2024-03-22 广州市南沙区北科光子感知技术研究院 Fusion method of multi-scale infrared and visible light images based on double partial differential equation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521818A (en) * 2011-12-05 2012-06-27 西北工业大学 Fusion method of SAR (Synthetic Aperture Radar) images and visible light images on the basis of NSCT (Non Subsampled Contourlet Transform)
US20140064636A1 (en) * 2007-11-29 2014-03-06 Sri International Multi-scale adaptive fusion with contrast normalization

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140064636A1 (en) * 2007-11-29 2014-03-06 Sri International Multi-scale adaptive fusion with contrast normalization
CN102521818A (en) * 2011-12-05 2012-06-27 西北工业大学 Fusion method of SAR (Synthetic Aperture Radar) images and visible light images on the basis of NSCT (Non Subsampled Contourlet Transform)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
许磊 et al., "Visible light and infrared image fusion method based on multi-scale decomposition and salient region extraction" (in Chinese), Laser & Optoelectronics Progress, online pre-publication, HTTP://KNS.CNKI.NET/KCMS/DETAIL/31.1690.TN.20170623.1053.014.HTML *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977950A (en) * 2017-12-06 2018-05-01 上海交通大学 Rapid and effective video image fusion method based on multi-scale guide filtering
CN107977950B (en) * 2017-12-06 2021-06-04 上海交通大学 Rapid and effective video image fusion method based on multi-scale guide filtering
CN108364273A (en) * 2018-01-30 2018-08-03 中南大学 A kind of method of multi-focus image fusion under spatial domain
CN108364273B (en) * 2018-01-30 2022-02-25 中南大学 Method for multi-focus image fusion in spatial domain
CN108830818A (en) * 2018-05-07 2018-11-16 西北工业大学 A kind of quick multi-focus image fusing method
CN109344699A (en) * 2018-08-22 2019-02-15 天津科技大学 Winter jujube disease recognition method based on depth of seam division convolutional neural networks
CN110930311A (en) * 2018-09-19 2020-03-27 杭州萤石软件有限公司 Method and device for improving signal-to-noise ratio of infrared image and visible light image fusion
CN109360175A (en) * 2018-10-12 2019-02-19 云南大学 A kind of infrared image interfusion method with visible light
CN109816617A (en) * 2018-12-06 2019-05-28 重庆邮电大学 Multimode medical image fusion method based on Steerable filter and graph theory conspicuousness
CN109816617B (en) * 2018-12-06 2023-05-26 重庆邮电大学 Multi-mode medical image fusion method based on guided filtering and graph theory significance
CN109754385A (en) * 2019-01-11 2019-05-14 中南大学 It is not registrated the rapid fusion method of multiple focussing image
CN110009551A (en) * 2019-04-09 2019-07-12 浙江大学 A kind of real-time blood vessel Enhancement Method of CPUGPU collaboration processing
CN110210541A (en) * 2019-05-23 2019-09-06 浙江大华技术股份有限公司 Image interfusion method and equipment, storage device
CN110211081A (en) * 2019-05-24 2019-09-06 南昌航空大学 A kind of multi-modality medical image fusion method based on image attributes and guiding filtering
CN110189284A (en) * 2019-05-24 2019-08-30 南昌航空大学 A kind of infrared and visible light image fusion method
CN110211081B (en) * 2019-05-24 2023-05-16 南昌航空大学 Multimode medical image fusion method based on image attribute and guided filtering
CN110349117B (en) * 2019-06-28 2023-02-28 重庆工商大学 Infrared image and visible light image fusion method and device and storage medium
CN110349117A (en) * 2019-06-28 2019-10-18 重庆工商大学 A kind of infrared image and visible light image fusion method, device and storage medium
WO2021120406A1 (en) * 2019-12-17 2021-06-24 大连理工大学 Infrared and visible light fusion method based on saliency map enhancement
CN111223069A (en) * 2020-01-14 2020-06-02 天津工业大学 Image fusion method and system
CN111223069B (en) * 2020-01-14 2023-06-02 天津工业大学 Image fusion method and system
CN111681243A (en) * 2020-08-17 2020-09-18 广东利元亨智能装备股份有限公司 Welding image processing method and device and electronic equipment
CN112132753A (en) * 2020-11-06 2020-12-25 湖南大学 Infrared image super-resolution method and system for multi-scale structure guide image
CN112132753B (en) * 2020-11-06 2022-04-05 湖南大学 Infrared image super-resolution method and system for multi-scale structure guide image
CN112837253A (en) * 2021-02-05 2021-05-25 中国人民解放军火箭军工程大学 Night infrared medium-long wave image fusion method and system
CN117745555A (en) * 2023-11-23 2024-03-22 广州市南沙区北科光子感知技术研究院 Fusion method of multi-scale infrared and visible light images based on double partial differential equation

Similar Documents

Publication Publication Date Title
CN107248150A (en) A multi-scale image fusion method based on guided-filtering salient region extraction
CN104809734B (en) A method of the infrared image based on guiding filtering and visual image fusion
CN106339998A (en) Multi-focus image fusion method based on contrast pyramid transformation
CN109801250A (en) Infrared and visible light image fusion method based on ADC-SCM and low-rank matrix expression
CN102800074B (en) Synthetic aperture radar (SAR) image change detection difference chart generation method based on contourlet transform
CN104408700A (en) Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images
CN103366353A (en) Infrared image and visible-light image fusion method based on saliency region segmentation
CN105335929A (en) Depth map super-resolution method
CN109064437A (en) Image fusion method based on guided filtering and online dictionary learning
Student Study of image fusion-techniques method and applications
CN106897987A (en) Image interfusion method based on translation invariant shearing wave and stack own coding
Kishore et al. Multi scale image fusion through Laplacian Pyramid and deep learning on thermal images
CN112163994A (en) Multi-scale medical image fusion method based on convolutional neural network
CN111815550A (en) Infrared and visible light image fusion method based on gray level co-occurrence matrix
Kalinovsky et al. Lesion detection in CT images using deep learning semantic segmentation technique
Yan et al. Adaptive fractional multi-scale edge-preserving decomposition and saliency detection fusion algorithm
Saini et al. Detector-segmentor network for skin lesion localization and segmentation
Patel et al. A review on infrared and visible image fusion techniques
CN117788296A (en) Infrared remote sensing image super-resolution reconstruction method based on heterogeneous combined depth network
CN116703769B (en) Satellite remote sensing image full-color sharpening system
CN116543165B (en) Remote sensing image fruit tree segmentation method based on dual-channel composite depth network
Dong et al. Infrared and visible light image fusion via pixel mean shift and source image gradient
Jian et al. Towards reliable object representation via sparse directional patches and spatial center cues
CN104182955B (en) Image interfusion method based on steerable pyramid conversion and device thereof
Hao et al. MGFuse: An infrared and visible image fusion algorithm based on multiscale decomposition optimization and gradient-weighted local energy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171013