CN112184606A - Fusion method of visible light image and infrared image based on Laplacian pyramid - Google Patents
- Publication number
- CN112184606A (application CN202011016322.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- sharpened
- fusion
- visible light
- infrared
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
Abstract
The fusion method of the visible light image and the infrared image based on the Laplacian pyramid first preprocesses the visible light image and the infrared image and applies Laplacian sharpening to obtain the corresponding focus-sharpened images; the information entropies of the two sharpened images are then obtained and used for a primary fusion. Laplacian decomposition follows: cross entropy is calculated on the low-frequency domain images to determine a weighted fusion coefficient and obtain the low-frequency fused image; for the other layers, the pixel with the larger absolute value at each point is taken as the pixel value of the corresponding fusion layer; an inverse Laplacian transform then yields the reconstructed fused image. Finally, morphological gradient processing and a secondary fusion produce the final fusion result image. With the method disclosed by the invention, the fusion result contains less noise and retains more of the structural information and edge information of the source images.
Description
Technical Field
The invention relates to the field of image processing, in particular to a fusion method of a visible light image and an infrared image based on a Laplacian pyramid.
Background
Infrared and visible image fusion is a common requirement in image fusion, and fusion methods of this kind are widely used in many applications. These algorithms combine the salient features of the source images into a single image, and the fused images serve a variety of computer vision tasks.
For decades, signal processing algorithms have been the main feature extraction tools in image fusion tasks. For example, a fusion method based on two-scale decomposition and saliency detection extracts a base layer and a detail layer with a mean filter and a median filter respectively, uses visually salient features to obtain a weight map, and then combines the three parts to construct the fused image.
In recent years, fusion methods based on representation learning have attracted great attention and exhibit state-of-the-art fusion performance. In the sparse representation domain, a new sparse-representation method for medical image fusion has been proposed in which the sub-dictionaries are learned from histogram of oriented gradients (HOG) features, and the fused image is then reconstructed with an l1-norm and max selection strategy. In addition, joint sparse representation, common sparse representation, and the pulse coupled neural network (PCNN) combined with the shearlet transform have also been applied to image fusion. In other representation learning fields, low-rank representation (LRR) has been applied to image fusion tasks. Many scholars also use HOG features and dictionary learning to obtain a global dictionary, apply that dictionary in LRR, obtain the fused low-rank coefficients with an l1-norm and max selection strategy, and finally reconstruct with the global dictionary and LRR. An efficient and simple latent low-rank representation (LatLRR) has also been proposed for infrared and visible image fusion, in which the source images are decomposed by LatLRR into low-frequency and high-frequency coefficients and fused with a weighted average strategy. However, the fusion results of visible light and infrared images still suffer from considerable noise and a lack of structural and edge information.
Disclosure of Invention
The main purpose of the present invention is to overcome the above-mentioned drawbacks of the prior art and to provide a method for fusing visible light images and infrared images whose fusion result contains less noise and retains more of the structural information and edge information of the source image.
The invention adopts the following technical scheme:
a fusion method of a visible light image and an infrared image based on a Laplace pyramid is characterized by comprising the following steps:
step S1: carrying out pixel level registration and graying processing on the visible light image and the infrared image to obtain a visible light gray image and an infrared gray image;
step S2: respectively carrying out Laplace sharpening on the visible light gray level image and the infrared gray level image to obtain a visible light focusing sharpened image and an infrared focusing sharpened image;
step S3: respectively obtaining information entropies of the visible light focusing sharpened image and the infrared focusing sharpened image obtained in the step S2, and determining a weighted fusion coefficient according to the information entropies to obtain a primary fusion image;
step S4, respectively carrying out Laplace decomposition on the two sharpened images obtained in the step S2 and the primary fused image obtained in the step S3, and decomposing the images into a plurality of layers of sub-images;
step S5, respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image, determining a weighted fusion coefficient of the low-frequency domain image, and obtaining a fusion image of a low-frequency domain; the method comprises the following specific steps:
in step S51, let the gray-level distributions of the source image and the fused image be p1 = {p1_0, p1_1, ..., p1_i, ..., p1_(L-1)} and q1 = {q1_0, q1_1, ..., q1_i, ..., q1_(L-1)}; the cross entropy is then defined as:

CE = Σ_(i=0)^(L-1) p1_i · log2(p1_i / q1_i)

where i represents the gray level of the image, p1_i is the ratio of the number of pixels in the source image with gray value equal to i to the total number of pixels in the image, q1_i is the ratio of the number of pixels in the fused image with gray value equal to i to the total number of pixels in the image, and L represents the maximum gray level;
respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image according to the formula;
step S52, let the fusion coefficients of the low-frequency domain images obtained by decomposing the visible light focus-sharpened image and the infrared focus-sharpened image be α2 and β2 respectively; then

α2 = CE'_vis / (CE'_vis + CE'_inf), β2 = CE'_inf / (CE'_vis + CE'_inf)

where CE'_vis and CE'_inf respectively represent the cross entropies between the low-frequency images obtained by decomposing the visible light and infrared focus-sharpened images and the low-frequency image obtained by decomposing the primary fusion image;
step S53, performing weighted fusion on the visible light focusing sharpened image and the low-frequency domain image obtained by decomposing the infrared focusing sharpened image, and obtaining a low-frequency domain fused image:
F_lowfre = F'_vislow · α2 + F'_inflow · β2

where F'_vislow and F'_inflow respectively represent the low-frequency domain images obtained by decomposing the visible light and infrared focus-sharpened images;
step S6, comparing the pixel values of the corresponding pixel points of the two sharpened images and the other layer images obtained by decomposing the primary fused image, and taking the larger absolute value of the pixel as the pixel value of the corresponding fused layer image at the corresponding point;
step S7, performing inverse laplacian transform on the fused image sequence composed in step S5 and step S6 to obtain a reconstructed fused image;
and step S8, performing morphological gradient processing on the two sharpened images obtained in the step S2, and performing secondary fusion on the two processed images and the fused image obtained in the step S7 to obtain a final fused result image.
Preferably, the step of performing Laplacian sharpening on the visible light gray-scale image and the infrared gray-scale image in step S2 is:
the second-order change expressed by the Laplacian operator of the image is superposed on the original pixels, i.e., difference processing is performed between the original image and the Laplacian-filtered image; the template operator is:

 0  -1   0
-1   5  -1
 0  -1   0
preferably, the specific process of acquiring the primary fusion image in step S3 is as follows:
step S31, obtaining information entropy of visible light focusing sharpened image and infrared focusing sharpened image
The information entropy E is calculated as:

E = -Σ_(i=0)^(L-1) p_i · log2(p_i)

where i represents the gray value of the image, p_i is the ratio of the number of pixels with gray value equal to i to the total number of pixels of the image, and L represents the maximum gray level;
respectively obtaining information entropy values of the visible light focusing sharpened image and the infrared focusing sharpened image obtained in the step S2 according to the formula;
step S32, determining the weighted fusion coefficients of the primary fusion image: let the fusion coefficients of the visible light focus-sharpened image and the infrared focus-sharpened image be α and β respectively; then

α = E'_vis / (E'_vis + E'_inf), β = E'_inf / (E'_vis + E'_inf)

where E'_vis and E'_inf are respectively the information entropies of the visible light focus-sharpened image and the infrared focus-sharpened image;
step S33, obtaining the primary fusion image: the visible light focus-sharpened image and the infrared focus-sharpened image are weighted and fused to obtain the primary fusion image F_firstfusion = F'_vis · α + F'_inf · β, where F'_vis and F'_inf are respectively the visible light focus-sharpened image and the infrared focus-sharpened image.
Preferably, the step of performing laplacian decomposition on the two sharpened images and the primary fused image in step S4 is:
step S41, establishing the Gaussian pyramid decomposition of the image: with the image G_0 as the zeroth layer of the Gaussian pyramid, the (l-1)-th layer image G_(l-1) is convolved with a window function ω(m, n) having low-pass characteristics, and the convolution result is downsampled by taking every other row and column to obtain the l-th layer image of the Gaussian pyramid, namely

G_l(i, j) = Σ_(m=-2)^(2) Σ_(n=-2)^(2) ω(m, n) · G_(l-1)(2i + m, 2j + n), 0 < l ≤ N, 0 ≤ i < C_l, 0 ≤ j < R_l

where N is the number of the top layer of the Gaussian pyramid; C_l and R_l respectively represent the number of columns and rows of the l-th layer image of the Gaussian pyramid; ω(m, n) is a two-dimensionally separable window function of size 5 × 5, expressed as:

ω(m, n) = w(m) · w(n), w = (1/16) · [1, 4, 6, 4, 1]
from G0、G1、…、GNForming a Gaussian pyramid, wherein the total layer number of the pyramid is N + 1;
step S42, establishing the Laplacian decomposition of the image: the Gaussian pyramid image G_l is interpolated and enlarged to obtain an image G*_l of the same size as G_(l-1), namely

G*_l(i, j) = 4 · Σ_(m=-2)^(2) Σ_(n=-2)^(2) ω(m, n) · G_l((i + m)/2, (j + n)/2), 0 < l ≤ N

where G_l((i + m)/2, (j + n)/2) is taken as zero whenever (i + m)/2 or (j + n)/2 is not an integer;
the l-th layer image CP_l of the contrast pyramid can then be expressed as

CP_l = G_l / G*_(l+1) - I, 0 ≤ l < N; CP_N = G_N

where I is an identity matrix;
CP_0, CP_1, …, CP_N then constitute the contrast pyramid.
Preferably, the other layer image fusion step in step S6 is:
comparing the pixel values of the corresponding pixel points of the two sharpened images and the other layer image obtained by decomposing the primary fused image, taking the pixel with the largest absolute value as the pixel value of the corresponding fused layer image at the corresponding point, and expressing the pixel value selection rule as follows:
F'_others(x, y) = max(|F'_visothers(x, y)|, |F'_infothers(x, y)|, |F'_firstfusionothers(x, y)|)

where F'_others represents the fusion result image of a decomposition layer other than the top layer; F'_visothers, F'_infothers, and F'_firstfusionothers respectively represent the corresponding layer images obtained by decomposing the visible light focus-sharpened image, the infrared focus-sharpened image, and the primary fusion image; (x, y) is the coordinate position of an image pixel point.
Preferably, the specific process of image reconstruction in step S7 is as follows:
From the contrast pyramid formula of each layer we obtain:

G_N = CP_N; G_l = (CP_l + I) · G*_(l+1), 0 ≤ l < N

and the decomposed original image G_0 is reconstructed by layer-by-layer recursion according to the above formula.
Preferably, the step of obtaining the final fusion result image through the secondary fusion in step S8 is:
step S81, performing morphological gradient processing on the visible light and infrared focusing sharpened images respectively by using the following formula:
Gradient(F)=Dilate(F)-Erode(F)
wherein F is the original input image, Dilate(F) is the dilation operation, and Erode(F) is the erosion operation;
step S82, performing the secondary fusion: if the fused image reconstructed in step S7 is FR1, the final fusion result image FResult is:
FResult=FR1+Gradient(F′vis)+Gradient(F′inf)
as can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:
the invention provides a visible light image and infrared image fusion method, which comprises the steps of carrying out pixel level registration and graying processing on a visible light image and an infrared image to obtain a visible light gray image and an infrared gray image; respectively carrying out Laplace sharpening on the visible light gray level image and the infrared gray level image to obtain a visible light focusing sharpened image and an infrared focusing sharpened image; respectively obtaining information entropies of the obtained visible light focusing sharpened image and the infrared focusing sharpened image, and determining a weighted fusion coefficient according to the information entropies to obtain a primary fusion image; respectively carrying out Laplace decomposition on the two obtained sharpened images and the primary fused image, and decomposing the images into a plurality of layers of sub-images; respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image, determining a weighted fusion coefficient of the low-frequency domain image, and obtaining a fusion image of a low-frequency domain; comparing pixel values of corresponding pixel points of the two sharpened images and the other layer image obtained by decomposing the primary fusion image, and taking the larger pixel absolute value as the pixel value of the corresponding fusion layer image at the corresponding point; performing inverse Laplace transform on the formed fusion image sequence to obtain a reconstructed fusion image; performing morphological gradient processing on the two obtained sharpened images respectively, and performing secondary fusion on the two processed sharpened images and the obtained fusion image to obtain a final fusion result image; according to the method for the visible light image and the infrared image, 
disclosed by the invention, the fusion result contains less noise and retains more structural information and edge information of the source image.
Drawings
FIG. 1 is a flow chart of a method of the present invention providing a preferred embodiment;
FIG. 2 shows the source images of an embodiment of the present invention: (a) the visible light image, (b) the infrared image;
FIG. 3 shows the experimental verification results of the present invention: FIG. 3(a) is the simple average fusion result image; FIG. 3(b) is the conventional Laplacian pyramid transform fusion result image; FIG. 3(c) is the conventional contrast pyramid transform fusion result image; FIG. 3(d) is the conventional gradient pyramid transform fusion result image; FIG. 3(e) is the conventional morphological pyramid transform fusion result image; FIG. 3(f) is the wavelet transform fusion result image; FIG. 3(g) shows the final fusion result of the present invention.
Detailed Description
The invention is further described below by means of specific embodiments.
The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. Likewise, the word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
Flow charts are used in this disclosure to illustrate steps of methods according to embodiments of the disclosure. It should be understood that the preceding and following steps are not necessarily performed in the exact order in which they are performed. Rather, various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to the processes, or a certain step or several steps of operations may be removed from the processes.
The technical scheme for solving the technical problems is as follows:
fig. 1 is a general flowchart of a fusion method of a visible light image and an infrared image based on a laplacian pyramid according to the present invention, and the following detailed description will be given to an embodiment of the present invention with reference to the accompanying drawings and example diagrams, including the following steps:
step S10: carrying out pixel level registration and graying processing on the visible light image and the infrared image to obtain a visible light gray image and an infrared gray image;
step S20: respectively carrying out Laplace sharpening on the visible light gray level image and the infrared gray level image to obtain a visible light focusing sharpened image and an infrared focusing sharpened image;
step S30: respectively obtaining information entropies of the visible light focus-sharpened image and the infrared focus-sharpened image obtained in step S20, and determining a weighted fusion coefficient according to the information entropies to obtain a primary fusion image;
step S40, respectively carrying out Laplace decomposition on the two sharpened images obtained in step S20 and the primary fusion image obtained in step S30, and decomposing the images into a plurality of layers of sub-images;
step S50, respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image, determining a weighted fusion coefficient of the low-frequency domain image, and obtaining a fusion image of a low-frequency domain; the method comprises the following specific steps:
in step S501, let the gray-level distributions of the source image and the fused image be p1 = {p1_0, p1_1, ..., p1_i, ..., p1_(L-1)} and q1 = {q1_0, q1_1, ..., q1_i, ..., q1_(L-1)}; the cross entropy is then defined as:

CE = Σ_(i=0)^(L-1) p1_i · log2(p1_i / q1_i)

where i represents the gray level of the image, p1_i is the ratio of the number of pixels in the source image with gray value equal to i to the total number of pixels in the image, q1_i is the ratio of the number of pixels in the fused image with gray value equal to i to the total number of pixels in the image, and L represents the maximum gray level;
respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image according to the formula;
step S502, let the fusion coefficients of the low-frequency domain images obtained by decomposing the visible light focus-sharpened image and the infrared focus-sharpened image be α2 and β2 respectively; then

α2 = CE'_vis / (CE'_vis + CE'_inf), β2 = CE'_inf / (CE'_vis + CE'_inf)

where CE'_vis and CE'_inf respectively represent the cross entropies between the low-frequency images obtained by decomposing the visible light and infrared focus-sharpened images and the low-frequency image obtained by decomposing the primary fusion image;
step S503, performing weighted fusion on the visible light focusing sharpened image and the low frequency domain image obtained by decomposing the infrared focusing sharpened image, and obtaining a low frequency domain fused image:
F_lowfre = F'_vislow · α2 + F'_inflow · β2

where F'_vislow and F'_inflow respectively represent the low-frequency domain images obtained by decomposing the visible light and infrared focus-sharpened images;
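The cross-entropy calculation and the low-frequency weighted fusion above can be sketched in Python with NumPy. The weight normalization (α2 and β2 proportional to the two cross entropies, summing to one) is an assumption, since the patent's weight formula appears only as an image, and the function names are illustrative:

```python
import numpy as np

def cross_entropy(src, fused, levels=256):
    # Gray-level histograms normalized into probability distributions p1, q1
    p, _ = np.histogram(src, bins=levels, range=(0, levels))
    q, _ = np.histogram(fused, bins=levels, range=(0, levels))
    p = p / p.sum()
    q = q / q.sum()
    # Sum p1_i * log2(p1_i / q1_i) over levels present in both images
    mask = (p > 0) & (q > 0)
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def fuse_lowfreq(f_vislow, f_inflow, ce_vis, ce_inf):
    # Assumed normalization: weights proportional to the cross entropies
    # (the patent's exact weight formula is an image and may differ)
    alpha2 = ce_vis / (ce_vis + ce_inf)
    beta2 = ce_inf / (ce_vis + ce_inf)
    return f_vislow * alpha2 + f_inflow * beta2
```

Identical gray-level distributions give a cross entropy of zero, which is a quick sanity check on the histogram normalization.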
step S60, comparing the pixel values of the corresponding pixel points of the two sharpened images and the other layer images obtained by decomposing the primary fused image, and taking the larger absolute value of the pixel as the pixel value of the corresponding fused layer image at the corresponding point;
step S70, performing inverse Laplacian transform on the fused image sequence composed in step S50 and step S60 to obtain a reconstructed fused image;
and step S80, performing morphological gradient processing on the two sharpened images obtained in step S20, and performing secondary fusion on the two processed images and the fused image obtained in step S70 to obtain a final fusion result image.
In step S20, the steps of performing Laplacian sharpening on the visible light gray-scale image and the infrared gray-scale image respectively are as follows:
the second-order change expressed by the Laplacian operator of the image is superposed on the original pixels, i.e., difference processing is performed between the original image and the Laplacian-filtered image; the template operator is:

 0  -1   0
-1   5  -1
 0  -1   0
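A minimal NumPy sketch of this sharpening step. The 4-neighbour template [[0, -1, 0], [-1, 5, -1], [0, -1, 0]], i.e. the original pixel minus its Laplacian, is an assumed instance of the operator, since the patent's template appears only as an image:

```python
import numpy as np

# Assumed sharpening template: original pixel minus its 4-neighbour
# Laplacian, i.e. F - (neighbours - 4F) = 5F - neighbours
KERNEL = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]], dtype=float)

def laplacian_sharpen(img):
    # 3x3 convolution with edge replication, clipped to the 8-bit range
    img = img.astype(float)
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += KERNEL[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return np.clip(out, 0, 255)
```

Because the kernel weights sum to one, a flat region passes through unchanged while edges are amplified.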
the specific process of acquiring the primary fusion image in step S30 is as follows:
step S301, obtaining information entropies of visible light focusing sharpened image and infrared focusing sharpened image
The information entropy E is calculated as:

E = -Σ_(i=0)^(L-1) p_i · log2(p_i)

where i represents the gray value of the image, p_i is the ratio of the number of pixels with gray value equal to i to the total number of pixels of the image, and L represents the maximum gray level;
respectively obtaining information entropy values of the visible light focusing sharpened image and the infrared focusing sharpened image obtained in the step S20 according to the formula;
step S302, determining the weighted fusion coefficients of the primary fusion image: let the fusion coefficients of the visible light focus-sharpened image and the infrared focus-sharpened image be α and β respectively; then

α = E'_vis / (E'_vis + E'_inf), β = E'_inf / (E'_vis + E'_inf)

where E'_vis and E'_inf are respectively the information entropies of the visible light focus-sharpened image and the infrared focus-sharpened image;
step S303, obtaining the primary fusion image: the visible light focus-sharpened image and the infrared focus-sharpened image are weighted and fused to obtain the primary fusion image F_firstfusion = F'_vis · α + F'_inf · β, where F'_vis and F'_inf are respectively the visible light focus-sharpened image and the infrared focus-sharpened image.
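Steps S301–S303 can be sketched as follows; the entropy-proportional weighting with α + β = 1 is an assumed reading of the weight formula, which appears only as an image in the patent:

```python
import numpy as np

def entropy(img, levels=256):
    # Shannon entropy of the gray-level histogram: E = -sum p_i * log2(p_i)
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

def primary_fusion(f_vis, f_inf):
    e_vis, e_inf = entropy(f_vis), entropy(f_inf)
    # Entropy-proportional weights with alpha + beta = 1 (assumption)
    alpha = e_vis / (e_vis + e_inf)
    beta = e_inf / (e_vis + e_inf)
    return alpha * f_vis.astype(float) + beta * f_inf.astype(float)
```

An image uniformly distributed over four gray levels has entropy exactly 2 bits, a convenient check.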
The laplacian decomposition of the two sharpened images and the primary fused image in step S40 includes:
step S401, establishing the Gaussian pyramid decomposition of the image: with the image G_0 as the zeroth layer of the Gaussian pyramid, the (l-1)-th layer image G_(l-1) is convolved with a window function ω(m, n) having low-pass characteristics, and the convolution result is downsampled by taking every other row and column to obtain the l-th layer image of the Gaussian pyramid, namely

G_l(i, j) = Σ_(m=-2)^(2) Σ_(n=-2)^(2) ω(m, n) · G_(l-1)(2i + m, 2j + n), 0 < l ≤ N, 0 ≤ i < C_l, 0 ≤ j < R_l

where N is the number of the top layer of the Gaussian pyramid; C_l and R_l respectively represent the number of columns and rows of the l-th layer image of the Gaussian pyramid; ω(m, n) is a two-dimensionally separable window function of size 5 × 5, expressed as:

ω(m, n) = w(m) · w(n), w = (1/16) · [1, 4, 6, 4, 1]
from G0、G1、…、GNForming a Gaussian pyramid, wherein the total layer number of the pyramid is N + 1;
step S402, establishing the Laplacian decomposition of the image: the Gaussian pyramid image G_l is interpolated and enlarged to obtain an image G*_l of the same size as G_(l-1), namely

G*_l(i, j) = 4 · Σ_(m=-2)^(2) Σ_(n=-2)^(2) ω(m, n) · G_l((i + m)/2, (j + n)/2), 0 < l ≤ N

where G_l((i + m)/2, (j + n)/2) is taken as zero whenever (i + m)/2 or (j + n)/2 is not an integer;
the l-th layer image CP_l of the contrast pyramid can then be expressed as

CP_l = G_l / G*_(l+1) - I, 0 ≤ l < N; CP_N = G_N

where I is an identity matrix;
CP_0, CP_1, …, CP_N then constitute the contrast pyramid.
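A NumPy sketch of steps S401–S402, taking the separable 5-tap window w = [1, 4, 6, 4, 1]/16 as an assumed instance of ω(m, n); the small eps guarding the division is also an addition the patent does not specify:

```python
import numpy as np

# Assumed separable 5-tap window: omega(m, n) = w[m] * w[n], weights sum to 1
W1D = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def _lowpass(img):
    # Separable 5x5 low-pass filtering with edge replication
    pad = np.pad(img, 2, mode="edge")
    tmp = sum(W1D[k] * pad[:, k:k + img.shape[1]] for k in range(5))
    return sum(W1D[k] * tmp[k:k + img.shape[0], :] for k in range(5))

def reduce_layer(img):
    # G_l = downsample(omega * G_{l-1}): filter, then every other row/column
    return _lowpass(img)[::2, ::2]

def expand_layer(img, shape):
    # Zero-insertion upsampling followed by 4x the low-pass filter
    up = np.zeros(shape)
    up[::2, ::2] = img
    return 4.0 * _lowpass(up)

def gaussian_pyramid(img, levels):
    pyr = [img.astype(float)]
    for _ in range(levels):
        pyr.append(reduce_layer(pyr[-1]))
    return pyr

def contrast_pyramid(gauss):
    # CP_l = G_l / expand(G_{l+1}) - I for l < N, and CP_N = G_N
    eps = 1e-9  # guard against division by zero (not in the patent)
    cp = [g / (expand_layer(gn, g.shape) + eps) - 1.0
          for g, gn in zip(gauss[:-1], gauss[1:])]
    cp.append(gauss[-1])
    return cp
```

The factor of 4 in `expand_layer` compensates for the three-quarters of samples that zero insertion sets to zero before filtering.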
The other layer image fusion step in step S60 is:
comparing the pixel values of the corresponding pixel points of the two sharpened images and the other layer image obtained by decomposing the primary fused image, taking the pixel with the largest absolute value as the pixel value of the corresponding fused layer image at the corresponding point, and expressing the pixel value selection rule as follows:
F'_others(x, y) = max(|F'_visothers(x, y)|, |F'_infothers(x, y)|, |F'_firstfusionothers(x, y)|)

where F'_others represents the fusion result image of a decomposition layer other than the top layer; F'_visothers, F'_infothers, and F'_firstfusionothers respectively represent the corresponding layer images obtained by decomposing the visible light focus-sharpened image, the infrared focus-sharpened image, and the primary fusion image; (x, y) is the coordinate position of an image pixel point.
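The selection rule for the remaining layers can be sketched as follows; the sketch keeps the signed coefficient with the largest magnitude, which is the usual practice, rather than the absolute value itself:

```python
import numpy as np

def fuse_other_layers(vis_layer, inf_layer, first_layer):
    # Per pixel, keep the coefficient whose absolute value is largest
    stacked = np.stack([vis_layer, inf_layer, first_layer])
    idx = np.abs(stacked).argmax(axis=0)
    return np.take_along_axis(stacked, idx[None], axis=0)[0]
```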
The specific process of image reconstruction in step S70 is:
From the contrast pyramid formula of each layer we obtain:

G_N = CP_N; G_l = (CP_l + I) · G*_(l+1), 0 ≤ l < N

and the decomposed original image G_0 is reconstructed by layer-by-layer recursion according to the above formula.
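The layer-by-layer recursion can be demonstrated with a decomposition/reconstruction round trip. For brevity this sketch substitutes 2 × 2 mean downsampling and nearest-neighbour expansion for the 5 × 5 window; with any fixed expand operator the contrast pyramid round trip is exact:

```python
import numpy as np

def expand_nn(img):
    # Simplified nearest-neighbour expand (stand-in for the patent's
    # windowed interpolation; the round trip below is exact either way)
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def decompose(img, levels):
    # Gaussian layers via 2x2 block means (simplified reduce), then
    # CP_l = G_l / expand(G_{l+1}) - I and CP_N = G_N
    gauss = [img.astype(float)]
    for _ in range(levels):
        g = gauss[-1]
        gauss.append(g.reshape(g.shape[0] // 2, 2,
                               g.shape[1] // 2, 2).mean(axis=(1, 3)))
    cp = [g / expand_nn(gn) - 1.0 for g, gn in zip(gauss[:-1], gauss[1:])]
    cp.append(gauss[-1])
    return cp

def reconstruct(cp):
    # G_N = CP_N; G_l = (CP_l + I) * expand(G_{l+1}), recursing down to G_0
    g = cp[-1]
    for layer in reversed(cp[:-1]):
        g = (layer + 1.0) * expand_nn(g)
    return g
```

The round trip is exact because each reconstruction step multiplies by exactly the expanded layer the decomposition divided by.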
The step of obtaining the final fusion result image through the secondary fusion in step S80 is:
step S801, performing morphological gradient processing on the visible light and infrared focusing sharpened images respectively by using the following formula:
Gradient(F)=Dilate(F)-Erode(F)
wherein F is the original input image, Dilate(F) is the dilation operation, and Erode(F) is the erosion operation;
the basic morphological transformations are dilation and erosion, which can be used to eliminate noise, segment individual picture elements, connect adjacent picture elements, etc. Dilation is the process of finding local pixel maxima, which causes the object boundary to expand outward; erosion is the minimum of pixels in the computed kernel region, which removes edge points contained in the connected component, shrinking the edges inward. Setting an original input image as F (x, y), and setting the selected structural elements as S (u, v), wherein (x, y) is the coordinate position of an image pixel point, and (u, v) is the coordinate position of a structural point; assuming D F and D S are the F and S domains, respectively, there is an inflation operation, which is recorded as
Corrosion operation, expressed as Θ
Erode(F)=(FΘS)(u,v)=min[F(u+x,v+y)-S(x,y)|(u+x),(v+y)∈DF;(x,y)∈DS]
Step S802, performing the secondary fusion: if the fused image reconstructed in step S70 is FR1, the final fusion result image FResult is:
FResult=FR1+Gradient(F′vis)+Gradient(F′inf)
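Steps S801–S802 can be sketched with a flat 3 × 3 structuring element, an assumed choice since the patent does not fix S:

```python
import numpy as np

def _shifted_stack(img):
    # The nine 3x3-neighbourhood shifts of the image (edge replication)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return np.stack([pad[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)])

def dilate(img):
    # Flat 3x3 structuring element: dilation is the local maximum
    return _shifted_stack(img).max(axis=0)

def erode(img):
    # Flat 3x3 structuring element: erosion is the local minimum
    return _shifted_stack(img).min(axis=0)

def morph_gradient(img):
    # Gradient(F) = Dilate(F) - Erode(F)
    return dilate(img) - erode(img)

def final_fusion(fr1, f_vis_sharp, f_inf_sharp):
    # FResult = FR1 + Gradient(F'_vis) + Gradient(F'_inf)
    return fr1 + morph_gradient(f_vis_sharp) + morph_gradient(f_inf_sharp)
```

On flat regions the gradient is zero, so the secondary fusion only adds energy along edges, which is exactly its purpose here.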
the following is illustrated by way of specific examples:
The fusion results obtained by the above methods are analyzed objectively using image quality evaluation factors. Evaluation factor values of the fusion result images are obtained in three respects: image definition, the amount of information contained in the image, and the statistical characteristics of the image. The average gradient and the spatial frequency represent the degree of image definition, the information entropy represents the amount of information contained in the image, and the standard deviation reflects the dispersion of the gray levels relative to the gray mean; the larger these evaluation parameters, the better the fusion effect.
TABLE 1 Objective evaluation index results of several fusion methods
As can be seen from the data in Table 1, the fusion result image obtained by the method of the present invention has the largest average gradient, spatial frequency, information entropy and standard deviation values, clearly exceeding the other fusion methods compared; it is therefore the clearest, carries the most information, and has the best pixel distribution characteristics.
As shown in Figs. 3(a)-(f), visual inspection of the fusion results of the various methods shows that the method of the present invention performs best, whether in the overall richness of image content, the contrast of the image scene, or detail information such as the edges and texture of the target object. The fused images obtained by simple average fusion, the several pyramid transformation methods, and the wavelet transformation method under the conventional approach all exhibit loss of detail such as scene texture, blurred target edges and contours, and poor contrast, whereas the image obtained by the present invention has a clear scene, prominent edge contours, distinct detail, strong contrast, and accurate, comprehensive information. It produces a good visual observation effect and effectively improves human understanding of the target scene.
In summary, the invention provides a method for fusing a visible light image and an infrared image. Pixel-level registration and graying are performed on the visible light image and the infrared image to obtain a visible light gray image and an infrared gray image; Laplacian sharpening is applied to each to obtain a visible light focusing sharpened image and an infrared focusing sharpened image; the information entropies of the two sharpened images are computed and used to determine weighted fusion coefficients, yielding a primary fused image; the two sharpened images and the primary fused image are each decomposed by Laplacian decomposition into several layers of sub-images; cross entropy is computed between the low-frequency sub-images of the two sharpened images and the low-frequency sub-image of the primary fused image to determine the weighted fusion coefficients of the low-frequency domain and obtain the low-frequency fused image; for the remaining layers, the pixel values at corresponding points of the three decompositions are compared, and the value with the larger absolute value is taken as the pixel value of the corresponding fused layer; inverse Laplacian transformation of the resulting fused image sequence yields a reconstructed fused image; finally, morphological gradient processing is applied to the two sharpened images, and the two processed images are fused a second time with the reconstructed image to obtain the final fusion result image.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications, as well as other embodiments, are intended to be included within the scope of this invention as defined in the appended claims and their equivalents.
Claims (7)
1. A fusion method of a visible light image and an infrared image based on a Laplace pyramid is characterized by comprising the following steps:
step S1: carrying out pixel level registration and graying processing on the visible light image and the infrared image to obtain a visible light gray image and an infrared gray image;
step S2: respectively carrying out Laplace sharpening on the visible light gray level image and the infrared gray level image to obtain a visible light focusing sharpened image and an infrared focusing sharpened image;
step S3: respectively obtaining information entropies of the visible light focusing sharpened image and the infrared focusing sharpened image obtained in the step S2, and determining a weighted fusion coefficient according to the information entropies to obtain a primary fusion image;
step S4, respectively carrying out Laplace decomposition on the two sharpened images obtained in the step S2 and the primary fused image obtained in the step S3, and decomposing the images into a plurality of layers of sub-images;
step S5, respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image, determining a weighted fusion coefficient of the low-frequency domain image, and obtaining a fusion image of a low-frequency domain; the method comprises the following specific steps:
In step S51, let the gray-scale distributions of the source image and the fused image be p1 = {p1_0, p1_1, ..., p1_i, ..., p1_{L-1}} and q1 = {q1_0, q1_1, ..., q1_i, ..., q1_{L-1}}; the cross entropy is then defined as:
CE = Σ_{i=0}^{L-1} p1_i·log2(p1_i / q1_i)
where i represents the gray level of the image, p1_i is the ratio of the number of pixels in the source image whose gray value equals i to the total number of pixels of the image, q1_i is the ratio of the number of pixels in the fused image whose gray value equals i to the total number of pixels of the image, and L represents the maximum gray level;
respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image according to the formula;
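The cross-entropy calculation of step S51 can be sketched as follows (an illustrative NumPy sketch; the function names and the small epsilon used to guard against empty fused-image bins are assumptions):

```python
import numpy as np

def gray_distribution(img, levels=256):
    """p_i: fraction of pixels whose gray value equals i."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=levels)
    return hist / hist.sum()

def cross_entropy(src, fused, levels=256, eps=1e-12):
    # CE = sum_i p1_i * log2(p1_i / q1_i), summed over occupied gray levels
    p1 = gray_distribution(src, levels)
    q1 = gray_distribution(fused, levels)
    mask = p1 > 0
    return float(np.sum(p1[mask] * np.log2(p1[mask] / (q1[mask] + eps))))
```

When the fused low-frequency image reproduces the source's gray-level distribution exactly, the cross entropy is close to zero; larger values indicate the distributions diverge more.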
step S52, the fusion coefficients of the low-frequency images obtained by decomposing the visible light focusing sharpened image and the infrared focusing sharpened image are set to α2 and β2, respectively; then
In the formula, CE′vis and CE′inf respectively represent the cross-entropy values obtained between the low-frequency image decomposed from the visible light focusing sharpened image and that of the primary fused image, and between the low-frequency image decomposed from the infrared focusing sharpened image and that of the primary fused image;
step S53, performing weighted fusion on the visible light focusing sharpened image and the low-frequency domain image obtained by decomposing the infrared focusing sharpened image, and obtaining a low-frequency domain fused image:
lowfre=F′vislow*α2+F′inflow*β2
In the formula, F′vislow and F′inflow respectively represent the low-frequency domain images obtained by decomposing the visible light and infrared focusing sharpened images;
step S6, comparing the pixel values of the corresponding pixel points of the two sharpened images and the other layer images obtained by decomposing the primary fused image, and taking the larger absolute value of the pixel as the pixel value of the corresponding fused layer image at the corresponding point;
step S7, performing inverse laplacian transform on the fused image sequence composed in step S5 and step S6 to obtain a reconstructed fused image;
and step S8, performing morphological gradient processing on the two sharpened images obtained in the step S2, and performing secondary fusion on the two processed images and the fused image obtained in the step S7 to obtain a final fused result image.
2. The method for fusing the visible light image and the infrared image based on the laplacian pyramid as claimed in claim 1, wherein the step of respectively performing laplacian sharpening on the visible light gray image and the infrared gray image in step S2 comprises:
the Laplacian (second-order differential) of the image is computed and superimposed on the original pixels, that is, difference processing is performed between the original image and the Laplacian-filtered image, with the template operator as follows:
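The template operator itself is not reproduced in this text. As an illustrative sketch, the commonly used 4-neighborhood sharpening template (original minus Laplacian) is assumed below; the patent's actual template may differ:

```python
import numpy as np

# Assumed 4-neighborhood sharpening template: I - Laplacian
SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=float)

def laplacian_sharpen(img, kernel=SHARPEN_KERNEL):
    """Convolve with the sharpening template (edge-replicated borders)."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out
```

Because the kernel coefficients sum to 1, flat regions are left unchanged while intensity transitions are amplified, which is the focusing/sharpening effect step S2 relies on.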
3. the method for fusing the visible light image and the infrared image based on the laplacian pyramid as claimed in claim 1, wherein the specific process of obtaining the fused image in step 3 is as follows:
step S31, obtaining the information entropies of the visible light focusing sharpened image and the infrared focusing sharpened image; the information entropy E is calculated as:
E = -Σ_{i=0}^{L-1} p_i·log2(p_i)
wherein i represents the gray-scale value of the image, p_i is the ratio of the number of pixels whose gray value equals i to the total number of pixels of the image, and L represents the maximum gray level;
respectively obtaining information entropy values of the visible light focusing sharpened image and the infrared focusing sharpened image obtained in the step S2 according to the formula;
step S32, determining the weighted fusion coefficients of the primary fused image: the fusion coefficients of the visible light focusing sharpened image and the infrared focusing sharpened image are set to α and β, respectively; then
In the formula, E′vis and E′inf are respectively the information entropy values of the visible light focusing sharpened image and the infrared focusing sharpened image;
step S33, acquiring the primary fused image: the visible light focusing sharpened image and the infrared focusing sharpened image are weighted and fused to obtain the primary fused image F′vis·α + F′inf·β, where F′vis and F′inf are respectively the visible light focusing sharpened image and the infrared focusing sharpened image.
4. The method for fusing the visible light image and the infrared image based on the laplacian pyramid as claimed in claim 1, wherein the step of performing laplacian decomposition on the two sharpened images and the primary fused image in step S4 is:
step S41, establishing the Gaussian pyramid decomposition of the image: take the original image G0 as the zeroth-layer image of the Gaussian pyramid; convolve the (l-1)-th layer image G_{l-1} of the Gaussian pyramid with a window function ω(m, n) having a low-pass characteristic, and then downsample the convolution result by taking every other row and column to obtain the l-th layer image of the Gaussian pyramid, namely
G_l(i, j) = Σ_{m=-2}^{2} Σ_{n=-2}^{2} ω(m, n)·G_{l-1}(2i+m, 2j+n), 0 < l ≤ N, 0 ≤ i < C_l, 0 ≤ j < R_l
In the formula, N is the layer number of the top layer of the Gaussian pyramid; C_l and R_l respectively represent the number of columns and rows of the l-th layer image of the Gaussian pyramid; ω(m, n) is a two-dimensionally separable window function of size 5 × 5, expressed as:
ω(m, n) = ω̂(m)·ω̂(n), with ω̂ = (1/16)[1, 4, 6, 4, 1]
from G0、G1、…、GNForming a Gaussian pyramid, wherein the total layer number of the pyramid is N + 1;
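The Gaussian pyramid construction of step S41 can be sketched as follows, using the standard separable 5×5 window (the loop-based convolution and edge padding are implementation choices of this sketch, not taken from the patent):

```python
import numpy as np

# Standard 5x5 separable low-pass window: outer product of [1, 4, 6, 4, 1] / 16
_w = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
WINDOW = np.outer(_w, _w)  # entries sum to 1

def reduce_layer(img):
    """Convolve with the window, then downsample by taking every other row/column."""
    padded = np.pad(img, 2, mode="edge")
    h, w = img.shape
    blurred = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + 5, j:j + 5] * WINDOW)
    return blurred[::2, ::2]

def gaussian_pyramid(img, levels):
    # G0 is the original image; each further layer halves the resolution
    pyr = [img.astype(float)]
    for _ in range(levels):
        pyr.append(reduce_layer(pyr[-1]))
    return pyr
```

Each call to `reduce_layer` implements one application of the G_l formula above: a low-pass convolution followed by 2× decimation in both directions.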
step S42, establishing the Laplacian decomposition of the image: interpolate and enlarge the Gaussian pyramid layer G_l to obtain an image G*_l of the same size as G_{l-1}, namely
G*_l(i, j) = 4·Σ_{m=-2}^{2} Σ_{n=-2}^{2} ω(m, n)·G_l((i+m)/2, (j+n)/2)
In the formula, G_l((i+m)/2, (j+n)/2) is taken as 0 whenever (i+m)/2 or (j+n)/2 is not an integer;
The l-th layer image CP_l of the contrast pyramid can be expressed as
CP_l = G_l / G*_{l+1} - I, 0 ≤ l < N;  CP_N = G_N
where the division is element-wise and I is a matrix of the same size whose elements are all 1;
from CP0、CP1、…、CPNI.e. constituting a contrast pyramid.
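The contrast pyramid construction of step S42 can be sketched as follows. For brevity this sketch enlarges layers by nearest-neighbor repetition rather than the window-function interpolation of the formula above, and guards the element-wise division with a small epsilon; both are simplifications of this sketch, not the patent's method:

```python
import numpy as np

def expand_layer(img, shape):
    """Enlarge a pyramid layer to `shape` (nearest-neighbor; the patent interpolates)."""
    big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return big[:shape[0], :shape[1]]

def contrast_pyramid(gauss_pyr, eps=1e-9):
    # CP_l = G_l / Expand(G_{l+1}) - I for l < N;  CP_N = G_N
    cp = []
    for l in range(len(gauss_pyr) - 1):
        up = expand_layer(gauss_pyr[l + 1], gauss_pyr[l].shape)
        cp.append(gauss_pyr[l] / (up + eps) - 1.0)
    cp.append(gauss_pyr[-1])
    return cp
```

In flat regions G_l matches the enlarged G_{l+1}, so the contrast coefficients are near zero; only local contrast survives in the intermediate layers, with the low-frequency residual kept at the top.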
5. The method for fusing the visible light image and the infrared image based on the laplacian pyramid as claimed in claim 1, wherein the other layer image fusing step in step S6 is:
comparing the pixel values of the corresponding pixel points of the two sharpened images and the other layer image obtained by decomposing the primary fused image, taking the pixel with the largest absolute value as the pixel value of the corresponding fused layer image at the corresponding point, and expressing the pixel value selection rule as follows:
F′others(x,y)=max(|F′visothers(x,y)|,|F′infothers(x,y)|,|F′firstfusionothers(x,y)|)
In the formula, F′others represents the decomposition-layer fusion result image of a layer other than the top layer; F′visothers, F′infothers and F′firstfusionothers respectively represent the corresponding layer images obtained by decomposing the visible light focusing sharpened image, the infrared focusing sharpened image and the primary fused image; and (x, y) is the coordinate position of an image pixel point.
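The max-absolute-value selection rule of step S6 can be sketched as follows (an illustrative NumPy sketch; note that the rule keeps the signed coefficient whose absolute value is largest, not the absolute value itself):

```python
import numpy as np

def fuse_high_layers(vis_layer, inf_layer, first_layer):
    """Per pixel, keep the coefficient with the largest absolute value of the three."""
    stacked = np.stack([vis_layer, inf_layer, first_layer])  # shape (3, h, w)
    idx = np.argmax(np.abs(stacked), axis=0)                 # winning source per pixel
    return np.take_along_axis(stacked, idx[None, ...], axis=0)[0]
```

This rule is applied independently to every decomposition layer except the top (low-frequency) layer, which is fused by the cross-entropy weighting of step S5 instead.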
6. The method for fusing the visible light image and the infrared image based on the laplacian pyramid as claimed in claim 4, wherein the image reconstruction in step S7 specifically comprises:
the reconstruction formula for each layer can be obtained from the contrast pyramid as follows:
G_N = CP_N;  G_l = (CP_l + I)·G*_{l+1}, 0 ≤ l < N
where the multiplication is element-wise; the decomposed original image G0 is reconstructed by recursing layer by layer from the top down according to the above formula.
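The layer-by-layer reconstruction of step S7 can be sketched as follows. As in the decomposition sketch, nearest-neighbor enlargement and an epsilon guard stand in for the patent's interpolation (assumptions of this sketch); the recursion otherwise inverts CP_l = G_l / Expand(G_{l+1}) - I exactly:

```python
import numpy as np

def expand_layer(img, shape):
    """Enlarge a pyramid layer to `shape` (nearest-neighbor; the patent interpolates)."""
    big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return big[:shape[0], :shape[1]]

def reconstruct(cp, eps=1e-9):
    # G_N = CP_N; then G_l = (CP_l + I) * Expand(G_{l+1}), recursing down to G_0
    g = cp[-1]
    for l in range(len(cp) - 2, -1, -1):
        up = expand_layer(g, cp[l].shape)
        g = (cp[l] + 1.0) * (up + eps)
    return g
```

Starting from the top-layer residual and multiplying each enlarged partial result by (CP_l + I) recovers the full-resolution fused image G0.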
7. The method for fusing the visible light image and the infrared image based on the laplacian pyramid as claimed in claim 3, wherein the step of obtaining the final fusion result image through the secondary fusion in step 8 comprises:
step S81, performing morphological gradient processing on the visible light and infrared focusing sharpened images respectively by using the following formula:
Gradient(F)=Dilate(F)-Erode(F)
wherein F is the original input image, Dilate(F) is the dilation operation function, and Erode(F) is the erosion operation function;
step S82, performing secondary fusion: if the fused image reconstructed in step S7 is FR1, the final fusion result image FResult is:
FResult=FR1+Gradient(F′vis)+Gradient(F′inf)。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011016322.7A CN112184606A (en) | 2020-09-24 | 2020-09-24 | Fusion method of visible light image and infrared image based on Laplacian pyramid |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112184606A true CN112184606A (en) | 2021-01-05 |
Family
ID=73956180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011016322.7A Pending CN112184606A (en) | 2020-09-24 | 2020-09-24 | Fusion method of visible light image and infrared image based on Laplacian pyramid |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112184606A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104835130A (en) * | 2015-04-17 | 2015-08-12 | 北京联合大学 | Multi-exposure image fusion method |
CN106339998B (en) * | 2016-08-18 | 2019-11-15 | 南京理工大学 | Multi-focus image fusing method based on contrast pyramid transformation |
CN111429391A (en) * | 2020-03-23 | 2020-07-17 | 西安科技大学 | Infrared and visible light image fusion method, fusion system and application |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113643219A (en) * | 2021-08-03 | 2021-11-12 | 武汉三江中电科技有限责任公司 | Image imaging method and device based on three-light fusion |
CN113643219B (en) * | 2021-08-03 | 2023-11-24 | 武汉三江中电科技有限责任公司 | Image imaging method and device based on three-light fusion |
CN114494069A (en) * | 2022-01-28 | 2022-05-13 | 广州华多网络科技有限公司 | Image processing method, apparatus, device, medium, and product |
CN115442523A (en) * | 2022-08-17 | 2022-12-06 | 深圳昱拓智能有限公司 | Method, system, medium and device for acquiring high-definition full-field-depth image of inspection robot |
CN115442523B (en) * | 2022-08-17 | 2023-09-05 | 深圳昱拓智能有限公司 | High-definition panoramic deep image acquisition method, system, medium and equipment of inspection robot |
CN116152132A (en) * | 2023-04-19 | 2023-05-23 | 山东仕达思医疗科技有限公司 | Depth of field superposition method, device and equipment for microscope image |
CN116152132B (en) * | 2023-04-19 | 2023-08-04 | 山东仕达思医疗科技有限公司 | Depth of field superposition method, device and equipment for microscope image |
CN116681633A (en) * | 2023-06-06 | 2023-09-01 | 国网上海市电力公司 | Multi-band imaging and fusion method |
CN116681633B (en) * | 2023-06-06 | 2024-04-12 | 国网上海市电力公司 | Multi-band imaging and fusion method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210105