CN112184606A - Fusion method of visible light image and infrared image based on Laplacian pyramid - Google Patents

Fusion method of visible light image and infrared image based on Laplacian pyramid

Info

Publication number
CN112184606A
CN112184606A (application CN202011016322.7A)
Authority
CN
China
Prior art keywords
image
sharpened
fusion
visible light
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011016322.7A
Other languages
Chinese (zh)
Inventor
仇飞
陈勐勐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Xiaozhuang University
Original Assignee
Nanjing Xiaozhuang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Xiaozhuang University filed Critical Nanjing Xiaozhuang University
Priority to CN202011016322.7A priority Critical patent/CN112184606A/en
Publication of CN112184606A publication Critical patent/CN112184606A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The fusion method of a visible light image and an infrared image based on the Laplacian pyramid first preprocesses the visible light image and the infrared image and applies Laplacian sharpening to obtain the corresponding focus-sharpened images; the information entropies of the focus-sharpened images are then obtained and a primary fusion is performed. Laplacian decomposition follows: cross entropy is calculated for the low-frequency images to determine their weighted fusion coefficients and obtain the low-frequency fused image; for the remaining layers, pixel values are compared and the pixel with the larger absolute value is taken as the value of the corresponding fusion layer at each point, after which an inverse Laplacian transform yields the reconstructed fused image. Finally, morphological gradient processing is applied and a secondary fusion produces the final fusion result image. With the method for fusing visible light and infrared images disclosed by the invention, the fusion result contains less noise and retains more of the structural and edge information of the source images.

Description

Fusion method of visible light image and infrared image based on Laplacian pyramid
Technical Field
The invention relates to the field of image processing, in particular to a fusion method of a visible light image and an infrared image based on a Laplacian pyramid.
Background
Infrared and visible image fusion is a common requirement in image processing, and such fusion methods are widely used in practice. These algorithms combine the salient features of the source images into a single synthesized image, which then serves a variety of computer vision tasks.
For decades, signal processing algorithms have been the principal feature-extraction tools in image fusion. One representative method, based on two-scale decomposition and saliency detection, extracts a base layer and a detail layer with a mean filter and a median filter respectively, derives a weight map from visually salient features, and then combines the three parts to construct the fused image.
In recent years, fusion methods based on representation learning have attracted great attention and exhibit state-of-the-art fusion performance. In the sparse-representation domain, a new sparse-representation method for medical image fusion was proposed in which sub-dictionaries are learned from histograms of oriented gradients; the fused image is then reconstructed with an l1-norm and choose-max selection strategy. Joint sparse representation, common sparse representation, pulse-coupled neural networks (PCNN) and shearlet transforms have also been applied to image fusion by merging sparse representations. In other representation-learning fields, low-rank representation (LRR) has been applied to image fusion tasks: many scholars use HOG features and dictionary learning to obtain a global dictionary, apply the dictionary within LRR, and obtain the fused low-rank coefficients with an l1-norm and choose-max strategy, finally combining the global dictionary with LRR. An efficient and simple latent low-rank representation (LatLRR) has also been proposed for infrared and visible image fusion, in which the source images are decomposed by LatLRR into low-frequency and high-frequency coefficients and fused with a weighted-average strategy. Nevertheless, judging from the fusion results of visible light and infrared images, these methods still suffer from considerable noise and from a loss of structure and edge information.
Disclosure of Invention
The main purpose of the present invention is to overcome the above-mentioned drawbacks of the prior art and to provide a method for fusing visible light images and infrared images whose fusion result contains less noise and retains more of the structural and edge information of the source images.
The invention adopts the following technical scheme:
a fusion method of a visible light image and an infrared image based on a Laplace pyramid is characterized by comprising the following steps:
step S1: carrying out pixel level registration and graying processing on the visible light image and the infrared image to obtain a visible light gray image and an infrared gray image;
step S2: respectively carrying out Laplace sharpening on the visible light gray level image and the infrared gray level image to obtain a visible light focusing sharpened image and an infrared focusing sharpened image;
step S3: respectively obtaining information entropies of the visible light focusing sharpened image and the infrared focusing sharpened image obtained in the step S2, and determining a weighted fusion coefficient according to the information entropies to obtain a primary fusion image;
step S4, respectively carrying out Laplace decomposition on the two sharpened images obtained in the step S2 and the primary fused image obtained in the step S3, and decomposing the images into a plurality of layers of sub-images;
step S5, respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image, determining a weighted fusion coefficient of the low-frequency domain image, and obtaining a fusion image of a low-frequency domain; the method comprises the following specific steps:
In step S51, the gray-level distributions of the source image and the fused image are written as p1 = {p1_0, p1_1, …, p1_i, …, p1_(L-1)} and q1 = {q1_0, q1_1, …, q1_i, …, q1_(L-1)}; the cross entropy is then defined as:

CE = Σ_(i=0)^(L-1) p1_i · log2(p1_i / q1_i)

where i denotes the gray level, p1_i is the ratio of the number of pixels with gray value i in the source image to the total number of pixels of the image, q1_i is the ratio of the number of pixels with gray value i in the fused image to the total number of pixels of the image, and L is the maximum gray level;
respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image according to the formula;
Step S52, let the fusion coefficients of the low-frequency images obtained by decomposing the visible light focus-sharpened image and the infrared focus-sharpened image be α2 and β2 respectively; then

α2 = CE'_vis / (CE'_vis + CE'_inf),  β2 = CE'_inf / (CE'_vis + CE'_inf)

where CE'_vis and CE'_inf denote the cross-entropy values between the low-frequency images decomposed from the visible light and infrared focus-sharpened images, respectively, and the low-frequency image decomposed from the primary fused image;
Step S53, performing weighted fusion on the low-frequency images obtained by decomposing the visible light and infrared focus-sharpened images to obtain the low-frequency fused image:

lowfre = F'_vislow · α2 + F'_inflow · β2

where F'_vislow and F'_inflow denote the low-frequency images obtained by decomposing the visible light and infrared focus-sharpened images respectively;
Step S6, comparing the pixel values at corresponding points of the other layers obtained by decomposing the two sharpened images and the primary fused image, and taking the pixel with the largest absolute value as the pixel value of the corresponding fusion layer at that point;
Step S7, performing an inverse Laplacian transform on the fused image sequence assembled in steps S5 and S6 to obtain the reconstructed fused image;
Step S8, performing morphological gradient processing on the two sharpened images obtained in step S2, and performing a secondary fusion of the two processed images with the fused image obtained in step S7 to obtain the final fusion result image.
Preferably, the step of performing Laplacian sharpening on the visible light gray image and the infrared gray image in step S2 is:

the second-order variation given by the Laplacian operator is superimposed on the original pixels, i.e. the original image and the Laplacian-filtered image are combined by differencing; the template operator is:

 [ 0 -1  0
  -1  5 -1
   0 -1  0 ]
preferably, the specific process of acquiring the primary fusion image in step 3 is as follows:
Step S31, obtaining the information entropies of the visible light focus-sharpened image and the infrared focus-sharpened image.

The information entropy E is calculated as:

E = −Σ_(i=0)^(L-1) p_i · log2(p_i)

where i denotes the gray value, p_i is the ratio of the number of pixels with gray value i to the total number of pixels of the image, and L is the maximum gray level;
respectively obtaining information entropy values of the visible light focusing sharpened image and the infrared focusing sharpened image obtained in the step S2 according to the formula;
Step S32, determining the weighted fusion coefficients of the primary fusion: let the fusion coefficients of the visible light focus-sharpened image and the infrared focus-sharpened image be α and β respectively; then

α = E'_vis / (E'_vis + E'_inf),  β = E'_inf / (E'_vis + E'_inf)

where E'_vis and E'_inf are the information entropy values of the visible light focus-sharpened image and the infrared focus-sharpened image respectively;
Step S33, obtaining the primary fused image by weighted fusion of the visible light focus-sharpened image and the infrared focus-sharpened image:

firstfusion = F'_vis · α + F'_inf · β

where F'_vis and F'_inf are the visible light focus-sharpened image and the infrared focus-sharpened image respectively.
Preferably, the step of performing Laplacian decomposition on the two sharpened images and the primary fused image in step S4 is:

Step S41, building the Gaussian pyramid of the image. Take the source image G_0 as the zeroth layer of the Gaussian pyramid; convolve the (l−1)-th layer G_(l−1) with a window function ω(m, n) having a low-pass characteristic, and downsample the result by taking every other row and column, giving the l-th layer:

G_l(i, j) = Σ_(m=−2)^(2) Σ_(n=−2)^(2) ω(m, n) · G_(l−1)(2i + m, 2j + n),  0 < l ≤ N, 0 ≤ i < C_l, 0 ≤ j < R_l

where N is the index of the top layer of the Gaussian pyramid; C_l and R_l denote the number of columns and rows of the l-th layer; and ω(m, n) is a two-dimensionally separable 5 × 5 window function:

ω = (1/256) · [1 4 6 4 1]^T · [1 4 6 4 1]

G_0, G_1, …, G_N form the Gaussian pyramid, with N + 1 layers in total;
Step S42, establishing the Laplacian decomposition of the image. Interpolate and enlarge the Gaussian pyramid image G_l to obtain G*_l, an image of the same size as G_(l−1):

G*_l(i, j) = 4 · Σ_(m=−2)^(2) Σ_(n=−2)^(2) ω(m, n) · G_l((i + m)/2, (j + n)/2),  0 < l ≤ N, 0 ≤ i < C_(l−1), 0 ≤ j < R_(l−1)

where G_l((i + m)/2, (j + n)/2) takes the pyramid value when (i + m)/2 and (j + n)/2 are integers and is 0 otherwise.

The l-th layer image CP_l of the contrast pyramid can then be expressed as

CP_l = G_l / G*_(l+1) − I,  0 ≤ l < N;  CP_N = G_N

where I is an identity matrix and the division is element-wise;

CP_0, CP_1, …, CP_N constitute the contrast pyramid.
Preferably, the other layer image fusion step in step S6 is:
comparing the pixel values of the corresponding pixel points of the two sharpened images and the other layer image obtained by decomposing the primary fused image, taking the pixel with the largest absolute value as the pixel value of the corresponding fused layer image at the corresponding point, and expressing the pixel value selection rule as follows:
F′others(x,y)=max(|F′visothers(x,y)|,|F′infothers(x,y)|,|F′firstfusionothers(x,y)|)
where F'_others denotes the fusion result image of a decomposition layer other than the top layer; F'_visothers, F'_infothers and F'_firstfusionothers denote the corresponding layer images obtained by decomposing the visible light focus-sharpened image, the infrared focus-sharpened image and the primary fused image respectively; and (x, y) is the coordinate position of an image pixel.
Preferably, the specific process of image reconstruction in step S7 is as follows:
From the contrast pyramid formula of each layer it follows that:

G_N = CP_N;  G_l = (CP_l + I) · G*_(l+1),  0 ≤ l < N

and the decomposed original image G_0 is reconstructed by recursing layer by layer from the top of the pyramid downwards.
Preferably, the step of obtaining the final fusion result image through the secondary fusion in step S8 is:
step S81, performing morphological gradient processing on the visible light and infrared focusing sharpened images respectively by using the following formula:
Gradient(F)=Dilate(F)-Erode(F)
where F is the original input image, Dilate(F) is the dilation operation and Erode(F) is the erosion operation;
Step S82, performing the secondary fusion: if the fused image reconstructed in step S7 is FR1, the final fusion result image FResult is:
FResult=FR1+Gradient(F′vis)+Gradient(F′inf)
as can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:
the invention provides a visible light image and infrared image fusion method, which comprises the steps of carrying out pixel level registration and graying processing on a visible light image and an infrared image to obtain a visible light gray image and an infrared gray image; respectively carrying out Laplace sharpening on the visible light gray level image and the infrared gray level image to obtain a visible light focusing sharpened image and an infrared focusing sharpened image; respectively obtaining information entropies of the obtained visible light focusing sharpened image and the infrared focusing sharpened image, and determining a weighted fusion coefficient according to the information entropies to obtain a primary fusion image; respectively carrying out Laplace decomposition on the two obtained sharpened images and the primary fused image, and decomposing the images into a plurality of layers of sub-images; respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image, determining a weighted fusion coefficient of the low-frequency domain image, and obtaining a fusion image of a low-frequency domain; comparing pixel values of corresponding pixel points of the two sharpened images and the other layer image obtained by decomposing the primary fusion image, and taking the larger pixel absolute value as the pixel value of the corresponding fusion layer image at the corresponding point; performing inverse Laplace transform on the formed fusion image sequence to obtain a reconstructed fusion image; performing morphological gradient processing on the two obtained sharpened images respectively, and performing secondary fusion on the two processed sharpened images and the obtained fusion image to obtain a final fusion result image; according to the method for the visible light image and the infrared image, disclosed by the invention, the fusion result contains less noise and retains more structural information and edge information of the source image.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the method of the present invention;
FIG. 2 shows the source images of an embodiment of the present invention: (a) the visible light image, (b) the infrared image;
FIG. 3 shows the experimental verification results: FIG. 3(a) is the simple average fusion result image; FIG. 3(b) is the conventional Laplacian pyramid transform fusion result image; FIG. 3(c) is the conventional contrast pyramid transform fusion result image; FIG. 3(d) is the conventional gradient pyramid transform fusion result image; FIG. 3(e) is the conventional morphological pyramid transform fusion result image; FIG. 3(f) is the wavelet transform fusion result image; FIG. 3(g) is the final fusion result of the present invention.
Detailed Description
The invention is further described below by means of specific embodiments.
The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. Likewise, the word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
Flow charts are used in this disclosure to illustrate the steps of methods according to embodiments of the disclosure. It should be understood that the steps need not be performed exactly in the order shown; rather, various steps may be processed in reverse order or simultaneously, other operations may be added to the processes, or one or more steps may be removed from them.
The technical scheme for solving the technical problems is as follows:
fig. 1 is a general flowchart of a fusion method of a visible light image and an infrared image based on a laplacian pyramid according to the present invention, and the following detailed description will be given to an embodiment of the present invention with reference to the accompanying drawings and example diagrams, including the following steps:
step S10: carrying out pixel level registration and graying processing on the visible light image and the infrared image to obtain a visible light gray image and an infrared gray image;
step S20: respectively carrying out Laplace sharpening on the visible light gray level image and the infrared gray level image to obtain a visible light focusing sharpened image and an infrared focusing sharpened image;
Step S30: respectively obtaining the information entropies of the visible light focus-sharpened image and the infrared focus-sharpened image obtained in step S20, and determining a weighted fusion coefficient from the information entropies to obtain the primary fused image;
Step S40, respectively performing Laplacian decomposition on the two sharpened images obtained in step S20 and the primary fused image obtained in step S30, decomposing each into several layers of sub-images;
step S50, respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image, determining a weighted fusion coefficient of the low-frequency domain image, and obtaining a fusion image of a low-frequency domain; the method comprises the following specific steps:
In step S501, the gray-level distributions of the source image and the fused image are written as p1 = {p1_0, p1_1, …, p1_i, …, p1_(L-1)} and q1 = {q1_0, q1_1, …, q1_i, …, q1_(L-1)}; the cross entropy is then defined as:

CE = Σ_(i=0)^(L-1) p1_i · log2(p1_i / q1_i)

where i denotes the gray level, p1_i is the ratio of the number of pixels with gray value i in the source image to the total number of pixels of the image, q1_i is the ratio of the number of pixels with gray value i in the fused image to the total number of pixels of the image, and L is the maximum gray level;
respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image according to the formula;
Step S502, let the fusion coefficients of the low-frequency images obtained by decomposing the visible light focus-sharpened image and the infrared focus-sharpened image be α2 and β2 respectively; then

α2 = CE'_vis / (CE'_vis + CE'_inf),  β2 = CE'_inf / (CE'_vis + CE'_inf)

where CE'_vis and CE'_inf denote the cross-entropy values between the low-frequency images decomposed from the visible light and infrared focus-sharpened images, respectively, and the low-frequency image decomposed from the primary fused image;
Step S503, performing weighted fusion on the low-frequency images obtained by decomposing the visible light and infrared focus-sharpened images to obtain the low-frequency fused image:

lowfre = F'_vislow · α2 + F'_inflow · β2

where F'_vislow and F'_inflow denote the low-frequency images obtained by decomposing the visible light and infrared focus-sharpened images respectively;
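For illustration only, the cross-entropy weighting of the low-frequency bands can be sketched in Python/NumPy as follows; the histogram normalization, the eps guard against empty bins, and the direct normalization of the weights are assumptions of this sketch rather than definitions taken from this disclosure:

import numpy as np

def gray_hist(img, levels=256):
    # Normalized gray-level histogram: p_i = n_i / N.
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    return hist / img.size

def cross_entropy(src, fused, levels=256, eps=1e-12):
    # CE = sum_i p1_i * log2(p1_i / q1_i), summed over occupied bins.
    p = gray_hist(src, levels)
    q = gray_hist(fused, levels)
    m = p > 0
    return float(np.sum(p[m] * np.log2(p[m] / (q[m] + eps))))

def fuse_lowfreq(low_vis, low_inf, low_first):
    # Weights alpha2, beta2 from the cross entropy of each source low band
    # against the low band of the primary fused image (normalization assumed).
    ce_vis = cross_entropy(low_vis, low_first)
    ce_inf = cross_entropy(low_inf, low_first)
    a2 = ce_vis / (ce_vis + ce_inf)
    return a2 * low_vis + (1.0 - a2) * low_inf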
Step S60, comparing the pixel values at corresponding points of the other layers obtained by decomposing the two sharpened images and the primary fused image, and taking the pixel with the largest absolute value as the pixel value of the corresponding fusion layer at that point;
Step S70, performing an inverse Laplacian transform on the fused image sequence assembled in steps S50 and S60 to obtain the reconstructed fused image;
Step S80, performing morphological gradient processing on the two sharpened images obtained in step S20, and performing a secondary fusion of the two processed images with the fused image obtained in step S70 to obtain the final fusion result image.
In step S20, the Laplacian sharpening of the visible light gray image and the infrared gray image is performed as follows:

the second-order variation given by the Laplacian operator is superimposed on the original pixels, i.e. the original image and the Laplacian-filtered image are combined by differencing; the template operator is:

 [ 0 -1  0
  -1  5 -1
   0 -1  0 ]
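As an illustrative sketch of this step, assuming 8-bit gray inputs and the common 4-neighbour sharpening kernel reconstructed above, the sharpening could be implemented with OpenCV:

import cv2
import numpy as np

# 4-neighbour Laplacian folded into one sharpening template: f - lap(f).
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float32)

def laplacian_sharpen(gray):
    # gray: single-channel uint8 image; output clipped back to 8 bits.
    out = cv2.filter2D(gray.astype(np.float32), -1, SHARPEN)
    return np.clip(out, 0, 255).astype(np.uint8)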
the specific process of acquiring the primary fusion image in step 30 is as follows:
step S301, obtaining information entropies of visible light focusing sharpened image and infrared focusing sharpened image
The information entropy E is calculated as:

E = −Σ_(i=0)^(L-1) p_i · log2(p_i)

where i denotes the gray value, p_i is the ratio of the number of pixels with gray value i to the total number of pixels of the image, and L is the maximum gray level;
respectively obtaining information entropy values of the visible light focusing sharpened image and the infrared focusing sharpened image obtained in the step S20 according to the formula;
Step S302, determining the weighted fusion coefficients of the primary fusion: let the fusion coefficients of the visible light focus-sharpened image and the infrared focus-sharpened image be α and β respectively; then

α = E'_vis / (E'_vis + E'_inf),  β = E'_inf / (E'_vis + E'_inf)

where E'_vis and E'_inf are the information entropy values of the visible light focus-sharpened image and the infrared focus-sharpened image respectively;
Step S303, obtaining the primary fused image by weighted fusion of the visible light focus-sharpened image and the infrared focus-sharpened image:

firstfusion = F'_vis · α + F'_inf · β

where F'_vis and F'_inf are the visible light focus-sharpened image and the infrared focus-sharpened image respectively.
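A minimal sketch of steps S301–S303, assuming 8-bit gray inputs and the entropy-normalized weights given above:

import numpy as np

def entropy(img, levels=256):
    # E = -sum_i p_i * log2(p_i), over the occupied histogram bins.
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist[hist > 0] / img.size
    return float(-np.sum(p * np.log2(p)))

def primary_fuse(f_vis, f_inf):
    # alpha = E_vis / (E_vis + E_inf); beta = 1 - alpha.
    e_vis, e_inf = entropy(f_vis), entropy(f_inf)
    alpha = e_vis / (e_vis + e_inf)
    return alpha * f_vis.astype(np.float64) + (1.0 - alpha) * f_inf.astype(np.float64)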
The Laplacian decomposition of the two sharpened images and the primary fused image in step S40 includes:

Step S401, building the Gaussian pyramid of the image. Take the source image G_0 as the zeroth layer of the Gaussian pyramid; convolve the (l−1)-th layer G_(l−1) with a window function ω(m, n) having a low-pass characteristic, and downsample the result by taking every other row and column, giving the l-th layer:

G_l(i, j) = Σ_(m=−2)^(2) Σ_(n=−2)^(2) ω(m, n) · G_(l−1)(2i + m, 2j + n),  0 < l ≤ N, 0 ≤ i < C_l, 0 ≤ j < R_l

where N is the index of the top layer of the Gaussian pyramid; C_l and R_l denote the number of columns and rows of the l-th layer; and ω(m, n) is a two-dimensionally separable 5 × 5 window function:

ω = (1/256) · [1 4 6 4 1]^T · [1 4 6 4 1]

G_0, G_1, …, G_N form the Gaussian pyramid, with N + 1 layers in total;
Step S402, establishing the Laplacian decomposition of the image. Interpolate and enlarge the Gaussian pyramid image G_l to obtain G*_l, an image of the same size as G_(l−1):

G*_l(i, j) = 4 · Σ_(m=−2)^(2) Σ_(n=−2)^(2) ω(m, n) · G_l((i + m)/2, (j + n)/2),  0 < l ≤ N, 0 ≤ i < C_(l−1), 0 ≤ j < R_(l−1)

where G_l((i + m)/2, (j + n)/2) takes the pyramid value when (i + m)/2 and (j + n)/2 are integers and is 0 otherwise.

The l-th layer image CP_l of the contrast pyramid can then be expressed as

CP_l = G_l / G*_(l+1) − I,  0 ≤ l < N;  CP_N = G_N

where I is an identity matrix and the division is element-wise;

CP_0, CP_1, …, CP_N constitute the contrast pyramid.
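For illustration, the Gaussian and contrast pyramids of steps S401–S402 can be sketched with OpenCV's pyrDown/pyrUp, whose built-in 5 × 5 filter matches the [1 4 6 4 1] window above; the eps guard against division by zero is an assumption of the sketch:

import cv2

def gaussian_pyramid(img, levels):
    # G_0 is the source; each further level is blurred with the 5x5 kernel
    # and downsampled by two.
    G = [img.astype('float64')]
    for _ in range(levels):
        G.append(cv2.pyrDown(G[-1]))
    return G

def contrast_pyramid(G, eps=1e-6):
    # CP_l = G_l / expand(G_{l+1}) - 1 for l < N;  CP_N = G_N.
    CP = []
    for l in range(len(G) - 1):
        h, w = G[l].shape[:2]
        up = cv2.pyrUp(G[l + 1], dstsize=(w, h))   # expand to the size of G_l
        CP.append(G[l] / (up + eps) - 1.0)
    CP.append(G[-1])
    return CP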
The other layer image fusion step in step S60 is:
comparing the pixel values of the corresponding pixel points of the two sharpened images and the other layer image obtained by decomposing the primary fused image, taking the pixel with the largest absolute value as the pixel value of the corresponding fused layer image at the corresponding point, and expressing the pixel value selection rule as follows:
F′others(x,y)=max(|F′visothers(x,y)|,|F′infothers(x,y)|,|F′firstfusionothers(x,y)|)
where F'_others denotes the fusion result image of a decomposition layer other than the top layer; F'_visothers, F'_infothers and F'_firstfusionothers denote the corresponding layer images obtained by decomposing the visible light focus-sharpened image, the infrared focus-sharpened image and the primary fused image respectively; and (x, y) is the coordinate position of an image pixel.
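A sketch of this per-pixel choose-max-absolute rule for one decomposition layer, assuming the three layer images are same-sized NumPy arrays:

import numpy as np

def fuse_layer(l_vis, l_inf, l_first):
    # Per pixel, keep the coefficient with the largest absolute value among
    # the three decompositions of the same layer.
    stack = np.stack([l_vis, l_inf, l_first])      # shape (3, H, W)
    winner = np.abs(stack).argmax(axis=0)          # index of max |value|
    return np.take_along_axis(stack, winner[None], axis=0)[0]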
The specific process of image reconstruction in step S70 is:
From the contrast pyramid formula of each layer it follows that:

G_N = CP_N;  G_l = (CP_l + I) · G*_(l+1),  0 ≤ l < N

and the decomposed original image G_0 is reconstructed by recursing layer by layer from the top of the pyramid downwards.
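The inverse transform of step S70 then inverts the contrast pyramid layer by layer; a sketch consistent with the contrast_pyramid sketch above (including its eps guard):

import cv2

def reconstruct(CP, eps=1e-6):
    # G_N = CP_N;  G_l = (CP_l + 1) * expand(G_{l+1}) for l < N.
    G = CP[-1]
    for l in range(len(CP) - 2, -1, -1):
        h, w = CP[l].shape[:2]
        up = cv2.pyrUp(G, dstsize=(w, h))
        G = (CP[l] + 1.0) * (up + eps)             # same eps as decomposition
    return G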
The step of obtaining the final fusion result image through the secondary fusion in step S80 is as follows:
step S801, performing morphological gradient processing on the visible light and infrared focusing sharpened images respectively by using the following formula:
Gradient(F)=Dilate(F)-Erode(F)
where F is the original input image, Dilate(F) is the dilation operation and Erode(F) is the erosion operation;
the basic morphological transformations are dilation and erosion, which can be used to eliminate noise, segment individual picture elements, connect adjacent picture elements, etc. Dilation is the process of finding local pixel maxima, which causes the object boundary to expand outward; erosion is the minimum of pixels in the computed kernel region, which removes edge points contained in the connected component, shrinking the edges inward. Setting an original input image as F (x, y), and setting the selected structural elements as S (u, v), wherein (x, y) is the coordinate position of an image pixel point, and (u, v) is the coordinate position of a structural point; assuming D F and D S are the F and S domains, respectively, there is an inflation operation, which is recorded as
Figure BDA0002699189180000141
Figure BDA0002699189180000142
Corrosion operation, expressed as Θ
Erode(F)=(FΘS)(u,v)=min[F(u+x,v+y)-S(x,y)|(u+x),(v+y)∈DF;(x,y)∈DS]
Step S802, performing the secondary fusion: if the fused image reconstructed in step S70 is FR1, the final fusion result image FResult is:
FResult=FR1+Gradient(F′vis)+Gradient(F′inf)
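A sketch of steps S801–S802, assuming 8-bit inputs and a 3 × 3 rectangular structuring element (the element size is not specified in this disclosure):

import cv2
import numpy as np

def secondary_fuse(fr1, f_vis, f_inf, ksize=3):
    # FResult = FR1 + Gradient(F'_vis) + Gradient(F'_inf), where the
    # morphological gradient is dilation minus erosion over a square element.
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    g_vis = cv2.morphologyEx(f_vis, cv2.MORPH_GRADIENT, se)
    g_inf = cv2.morphologyEx(f_inf, cv2.MORPH_GRADIENT, se)
    out = fr1.astype(np.float64) + g_vis + g_inf
    return np.clip(out, 0, 255).astype(np.uint8)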
the following is illustrated by way of specific examples:
the fusion result obtained by the methods is subjected to objective angle analysis by utilizing image quality evaluation factors, evaluation factor values of the fusion result image are obtained from the three aspects of image definition, image contained information quantity and image statistical characteristics, wherein the average gradient and the spatial frequency represent the image definition degree, the information entropy represents the quantity of the image contained information quantity, the standard deviation reflects the discrete degree of the gray scale relative to the gray scale mean value, and the larger the evaluation parameters are, the better the fusion effect is.
TABLE 1 Objective evaluation index results of several fusion methods
[The numerical results of Table 1 appear only as images in the original publication.]
As can be seen from the data in Table 1, the fusion result image obtained by the method of the present invention has the largest values of average gradient, spatial frequency, information entropy and standard deviation, clearly exceeding the compared fusion methods; it is therefore the sharpest, carries the most information, and has the best pixel-distribution characteristics.
As shown in FIGS. 3(a)-(g), visual comparison of the fusion results of the various methods shows that the method of the present invention performs best, whether judged by the overall richness of image content, the contrast of the image scene, or detail such as the edges and texture of the target object. The fused images obtained by simple average fusion, the several pyramid transform methods, and the wavelet transform method all exhibit loss of detail such as scene texture, blurring of target edges and contours, and poor contrast, whereas the image obtained by the present invention has a clear scene, prominent edge contours, clear detail, strong contrast, and accurate, comprehensive information. It forms a good visual observation effect and effectively improves the viewer's understanding of the target scene.
The invention provides a method for fusing a visible light image and an infrared image. Pixel-level registration and graying are applied to the visible light image and the infrared image to obtain a visible light gray image and an infrared gray image; Laplacian sharpening is applied to each to obtain the visible light and infrared focus-sharpened images; the information entropies of the two focus-sharpened images are obtained, and weighted fusion coefficients determined from them yield the primary fused image; the two sharpened images and the primary fused image are each decomposed by Laplacian decomposition into several layers of sub-images; cross entropy is calculated between the low-frequency images of the sharpened images and that of the primary fused image to determine the weighted fusion coefficients of the low-frequency domain and obtain the low-frequency fused image; for the other layers, the pixel values at corresponding points are compared and the pixel with the largest absolute value is taken as the value of the corresponding fusion layer; an inverse Laplacian transform of the resulting fused image sequence gives the reconstructed fused image; finally, morphological gradient processing is applied to the two sharpened images, and a secondary fusion of the processed images with the reconstructed fused image produces the final fusion result image.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the claims. It is to be understood that the foregoing is illustrative of the present invention and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.

Claims (7)

1. A fusion method of a visible light image and an infrared image based on the Laplacian pyramid is characterized by comprising the following steps:
step S1: carrying out pixel level registration and graying processing on the visible light image and the infrared image to obtain a visible light gray image and an infrared gray image;
step S2: respectively carrying out Laplace sharpening on the visible light gray level image and the infrared gray level image to obtain a visible light focusing sharpened image and an infrared focusing sharpened image;
step S3: respectively obtaining information entropies of the visible light focusing sharpened image and the infrared focusing sharpened image obtained in the step S2, and determining a weighted fusion coefficient according to the information entropies to obtain a primary fusion image;
step S4, respectively carrying out Laplace decomposition on the two sharpened images obtained in the step S2 and the primary fused image obtained in the step S3, and decomposing the images into a plurality of layers of sub-images;
step S5, respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image, determining a weighted fusion coefficient of the low-frequency domain image, and obtaining a fusion image of a low-frequency domain; the method comprises the following specific steps:
In step S51, the gray-level distributions of the source image and the fused image are written as p1 = {p1_0, p1_1, …, p1_i, …, p1_(L-1)} and q1 = {q1_0, q1_1, …, q1_i, …, q1_(L-1)}; the cross entropy is then defined as:

CE = Σ_(i=0)^(L-1) p1_i · log2(p1_i / q1_i)

where i denotes the gray level, p1_i is the ratio of the number of pixels with gray value i in the source image to the total number of pixels of the image, q1_i is the ratio of the number of pixels with gray value i in the fused image to the total number of pixels of the image, and L is the maximum gray level;
respectively carrying out cross entropy calculation on the low-frequency domain image obtained by decomposing the two sharpened images and the low-frequency domain image obtained by decomposing the primary fusion image according to the formula;
Step S52, let the fusion coefficients of the low-frequency images obtained by decomposing the visible light focus-sharpened image and the infrared focus-sharpened image be α2 and β2 respectively; then

α2 = CE'_vis / (CE'_vis + CE'_inf),  β2 = CE'_inf / (CE'_vis + CE'_inf)

where CE'_vis and CE'_inf denote the cross-entropy values between the low-frequency images decomposed from the visible light and infrared focus-sharpened images, respectively, and the low-frequency image decomposed from the primary fused image;
Step S53, performing weighted fusion on the low-frequency images obtained by decomposing the visible light and infrared focus-sharpened images to obtain the low-frequency fused image:

lowfre = F'_vislow · α2 + F'_inflow · β2

where F'_vislow and F'_inflow denote the low-frequency images obtained by decomposing the visible light and infrared focus-sharpened images respectively;
Step S6, comparing the pixel values at corresponding points of the other layers obtained by decomposing the two sharpened images and the primary fused image, and taking the pixel with the largest absolute value as the pixel value of the corresponding fusion layer at that point;
Step S7, performing an inverse Laplacian transform on the fused image sequence assembled in steps S5 and S6 to obtain the reconstructed fused image;
Step S8, performing morphological gradient processing on the two sharpened images obtained in step S2, and performing a secondary fusion of the two processed images with the fused image obtained in step S7 to obtain the final fusion result image.
2. The method for fusing a visible light image and an infrared image based on the Laplacian pyramid as claimed in claim 1, wherein the step of respectively performing Laplacian sharpening on the visible light gray image and the infrared gray image in step S2 comprises:

superimposing the second-order variation given by the Laplacian operator on the original pixels, i.e. combining the original image and the Laplacian-filtered image by differencing, with the template operator:

 [ 0 -1  0
  -1  5 -1
   0 -1  0 ]
3. the method for fusing the visible light image and the infrared image based on the laplacian pyramid as claimed in claim 1, wherein the specific process of obtaining the fused image in step 3 is as follows:
step S31, obtaining information entropy of visible light focusing sharpened image and infrared focusing sharpened image
The information entropy E is calculated as:

E = −Σ_(i=0)^(L-1) p_i · log2(p_i)

where i denotes the gray value, p_i is the ratio of the number of pixels with gray value i to the total number of pixels of the image, and L is the maximum gray level;
respectively obtaining information entropy values of the visible light focusing sharpened image and the infrared focusing sharpened image obtained in the step S2 according to the formula;
Step S32, determining the weighted fusion coefficients of the primary fusion: let the fusion coefficients of the visible light focus-sharpened image and the infrared focus-sharpened image be α and β respectively; then

α = E'_vis / (E'_vis + E'_inf),  β = E'_inf / (E'_vis + E'_inf)

where E'_vis and E'_inf are the information entropy values of the visible light focus-sharpened image and the infrared focus-sharpened image respectively;
Step S33, acquiring the primary fused image by weighted fusion of the visible light focus-sharpened image and the infrared focus-sharpened image: firstfusion = F'_vis · α + F'_inf · β, where F'_vis and F'_inf are the visible light focus-sharpened image and the infrared focus-sharpened image respectively.
4. The method for fusing a visible light image and an infrared image based on the Laplacian pyramid as claimed in claim 1, wherein the step of performing Laplacian decomposition on the two sharpened images and the primary fused image in step S4 is:

Step S41, building the Gaussian pyramid of the image. Take the source image G_0 as the zeroth layer of the Gaussian pyramid; convolve the (l−1)-th layer G_(l−1) with a window function ω(m, n) having a low-pass characteristic, and downsample the result by taking every other row and column, giving the l-th layer:

G_l(i, j) = Σ_(m=−2)^(2) Σ_(n=−2)^(2) ω(m, n) · G_(l−1)(2i + m, 2j + n),  0 < l ≤ N, 0 ≤ i < C_l, 0 ≤ j < R_l

where N is the index of the top layer of the Gaussian pyramid; C_l and R_l denote the number of columns and rows of the l-th layer; and ω(m, n) is a two-dimensionally separable 5 × 5 window function:

ω = (1/256) · [1 4 6 4 1]^T · [1 4 6 4 1]

G_0, G_1, …, G_N form the Gaussian pyramid, with N + 1 layers in total;
Step S42, establishing the Laplacian decomposition of the image. Interpolate and enlarge the Gaussian pyramid image G_l to obtain G*_l, an image of the same size as G_(l−1):

G*_l(i, j) = 4 · Σ_(m=−2)^(2) Σ_(n=−2)^(2) ω(m, n) · G_l((i + m)/2, (j + n)/2),  0 < l ≤ N, 0 ≤ i < C_(l−1), 0 ≤ j < R_(l−1)

where G_l((i + m)/2, (j + n)/2) takes the pyramid value when (i + m)/2 and (j + n)/2 are integers and is 0 otherwise;

the l-th layer image CP_l of the contrast pyramid can then be expressed as

CP_l = G_l / G*_(l+1) − I,  0 ≤ l < N;  CP_N = G_N

where I is an identity matrix and the division is element-wise;

CP_0, CP_1, …, CP_N constitute the contrast pyramid.
5. The method for fusing a visible light image and an infrared image based on the Laplacian pyramid as claimed in claim 1, wherein the other-layer image fusion step in step S6 is:
comparing the pixel values of the corresponding pixel points of the two sharpened images and the other layer image obtained by decomposing the primary fused image, taking the pixel with the largest absolute value as the pixel value of the corresponding fused layer image at the corresponding point, and expressing the pixel value selection rule as follows:
F′others(x,y)=max(|F′visothers(x,y)|,|F′infothers(x,y)|,|F′firstfusionothers(x,y)|)
where F'_others denotes the fusion result image of a decomposition layer other than the top layer; F'_visothers, F'_infothers and F'_firstfusionothers denote the corresponding layer images obtained by decomposing the visible light focus-sharpened image, the infrared focus-sharpened image and the primary fused image respectively; and (x, y) is the coordinate position of an image pixel.
6. The method for fusing a visible light image and an infrared image based on the Laplacian pyramid as claimed in claim 4, wherein the specific process of image reconstruction in step S7 is:
From the contrast pyramid formula of each layer it follows that:

G_N = CP_N;  G_l = (CP_l + I) · G*_(l+1),  0 ≤ l < N

and the decomposed original image G_0 is reconstructed by recursing layer by layer from the top of the pyramid downwards.
7. The method for fusing a visible light image and an infrared image based on the Laplacian pyramid as claimed in claim 3, wherein the step of obtaining the final fusion result image through the secondary fusion in step S8 comprises:
step S81, performing morphological gradient processing on the visible light and infrared focusing sharpened images respectively by using the following formula:
Gradient(F)=Dilate(F)-Erode(F)
where F is the original input image, Dilate(F) is the dilation operation and Erode(F) is the erosion operation;
Step S82, performing the secondary fusion: if the fused image reconstructed in step S7 is FR1, the final fusion result image FResult is:
FResult=FR1+Gradient(F′vis)+Gradient(F′inf)。
CN202011016322.7A 2020-09-24 2020-09-24 Fusion method of visible light image and infrared image based on Laplacian pyramid Pending CN112184606A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011016322.7A CN112184606A (en) 2020-09-24 2020-09-24 Fusion method of visible light image and infrared image based on Laplacian pyramid

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011016322.7A CN112184606A (en) 2020-09-24 2020-09-24 Fusion method of visible light image and infrared image based on Laplacian pyramid

Publications (1)

Publication Number Publication Date
CN112184606A true CN112184606A (en) 2021-01-05

Family

ID=73956180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011016322.7A Pending CN112184606A (en) 2020-09-24 2020-09-24 Fusion method of visible light image and infrared image based on Laplacian pyramid

Country Status (1)

Country Link
CN (1) CN112184606A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643219A (en) * 2021-08-03 2021-11-12 武汉三江中电科技有限责任公司 Image imaging method and device based on three-light fusion
CN114494069A (en) * 2022-01-28 2022-05-13 广州华多网络科技有限公司 Image processing method, apparatus, device, medium, and product
CN115442523A (en) * 2022-08-17 2022-12-06 深圳昱拓智能有限公司 Method, system, medium and device for acquiring high-definition full-field-depth image of inspection robot
CN116152132A (en) * 2023-04-19 2023-05-23 山东仕达思医疗科技有限公司 Depth of field superposition method, device and equipment for microscope image
CN116681633A (en) * 2023-06-06 2023-09-01 国网上海市电力公司 Multi-band imaging and fusion method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104835130A (en) * 2015-04-17 2015-08-12 北京联合大学 Multi-exposure image fusion method
CN106339998B (en) * 2016-08-18 2019-11-15 南京理工大学 Multi-focus image fusing method based on contrast pyramid transformation
CN111429391A (en) * 2020-03-23 2020-07-17 西安科技大学 Infrared and visible light image fusion method, fusion system and application

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104835130A (en) * 2015-04-17 2015-08-12 北京联合大学 Multi-exposure image fusion method
CN106339998B (en) * 2016-08-18 2019-11-15 南京理工大学 Multi-focus image fusing method based on contrast pyramid transformation
CN111429391A (en) * 2020-03-23 2020-07-17 西安科技大学 Infrared and visible light image fusion method, fusion system and application

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643219A (en) * 2021-08-03 2021-11-12 武汉三江中电科技有限责任公司 Image imaging method and device based on three-light fusion
CN113643219B (en) * 2021-08-03 2023-11-24 武汉三江中电科技有限责任公司 Image imaging method and device based on three-light fusion
CN114494069A (en) * 2022-01-28 2022-05-13 广州华多网络科技有限公司 Image processing method, apparatus, device, medium, and product
CN115442523A (en) * 2022-08-17 2022-12-06 深圳昱拓智能有限公司 Method, system, medium and device for acquiring high-definition full-field-depth image of inspection robot
CN115442523B (en) * 2022-08-17 2023-09-05 深圳昱拓智能有限公司 High-definition panoramic deep image acquisition method, system, medium and equipment of inspection robot
CN116152132A (en) * 2023-04-19 2023-05-23 山东仕达思医疗科技有限公司 Depth of field superposition method, device and equipment for microscope image
CN116152132B (en) * 2023-04-19 2023-08-04 山东仕达思医疗科技有限公司 Depth of field superposition method, device and equipment for microscope image
CN116681633A (en) * 2023-06-06 2023-09-01 国网上海市电力公司 Multi-band imaging and fusion method
CN116681633B (en) * 2023-06-06 2024-04-12 国网上海市电力公司 Multi-band imaging and fusion method

Similar Documents

Publication Publication Date Title
CN112184606A (en) Fusion method of visible light image and infrared image based on Laplacian pyramid
Engin et al. Cycle-dehaze: Enhanced cyclegan for single image dehazing
CN105957063B (en) CT image liver segmentation method and system based on multiple dimensioned weighting similarity measure
CN106339998B (en) Multi-focus image fusing method based on contrast pyramid transformation
CN109035172B (en) Non-local mean ultrasonic image denoising method based on deep learning
CN105931226A (en) Automatic cell detection and segmentation method based on deep learning and using adaptive ellipse fitting
EP2284795A2 (en) Quantitative analysis, visualization and motion correction in dynamic processes
DE10048029A1 (en) Procedure for calculating a transformation connecting two images
Luo Pattern recognition and image processing
Li et al. Multifocus image fusion scheme based on the multiscale curvature in nonsubsampled contourlet transform domain
CN111179193B (en) Dermatoscope image enhancement and classification method based on DCNNs and GANs
CN113191979B (en) Non-local mean denoising method for partitioned SAR (synthetic aperture radar) image
DE102017220752A1 (en) Image processing apparatus, image processing method and image processing program
Yan et al. Improved mask R-CNN for lung nodule segmentation
CN111179173B (en) Image splicing method based on discrete wavelet transform and gradient fusion algorithm
Gao et al. Bayesian image super-resolution with deep modeling of image statistics
Dogra et al. An efficient image integration algorithm for night mode vision applications
CN108985320B (en) Multi-source image fusion method based on discriminant dictionary learning and morphological component decomposition
Krishna et al. Machine learning based de-noising of electron back scatter patterns of various crystallographic metallic materials fabricated using laser directed energy deposition
Hernandez et al. Tactile imaging using watershed-based image segmentation
CN107945142B (en) Synthetic aperture radar image denoising method
Wang et al. New region-based image fusion scheme using the discrete wavelet frame transform
Kanchana et al. Texture classification using discrete shearlet transform
Dash et al. Gaussian pyramid based laws' mask descriptor for texture classification
Suneja et al. Cloud based Medical Image De-Noising using Deep Convolution Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210105)