AU2021101531A4 - A Fusion Method of Infrared Image and Visible Image - Google Patents

A Fusion Method of Infrared Image and Visible Image

Info

Publication number
AU2021101531A4
AU2021101531A4
Authority
AU
Australia
Prior art keywords
image
fusion
layer
infrared
visible
Prior art date
Legal status
Ceased
Application number
AU2021101531A
Inventor
Jingpeng Dai
Zhongqiang Luo
Xingzhong Xiong
Current Assignee
Sichuan University of Science and Engineering
Original Assignee
Sichuan University of Science and Engineering
Priority date
Filing date
Publication date
Application filed by Sichuan University of Science and Engineering filed Critical Sichuan University of Science and Engineering
Priority to AU2021101531A
Application granted
Publication of AU2021101531A4
Ceased (legal status)
Anticipated expiration (legal status)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction

Abstract

A fusion method of infrared image and visible image is disclosed in the invention. The invention first decomposes the infrared image and the visible image into a base layer and a detail layer, which removes small-scale structure while retaining edge details. Image detail features are then extracted layer by layer with a VGG-19 network, and activity level maps of the infrared image and the visible image are obtained. Finally, different fusion strategies are adopted for the base layer and the detail layer. The fusion result obtained by this method retains both the texture information of the visible image and the thermal radiation information of the infrared image. The invention can be applied to the fields of target detection, target tracking, night vision, biological recognition, etc. [Accompanying figures: Figure 1, a flow diagram of the invention; Figure 2, the iterative process.]

Description

FIGURES
Figure 1. A flow diagram of the invention (steps S1 to S6: obtaining the original guidance images, iterating to obtain the base layers and detail layers, obtaining the activity maps of the detail layers, fusing the base layers, fusing the detail layers, and fusing the base and detail fusion images into the final fusion image).
Figure 2. The iterative process (input image and input guidance; small-scale structure removal and edge restoration over successive iterations; the fourth iteration yields the base layer image).
A Fusion Method of Infrared Image and Visible Image
TECHNICAL FIELD
The invention relates to the field of image processing, in particular to a fusion method of infrared
image and visible image.
BACKGROUND
The fusion of a visible image and an infrared image achieves information complementarity, so that the fused image contains more comprehensive and abundant information, is better aligned with the visual characteristics of humans and machines, and is more conducive to further analysis and processing as well as automatic target recognition. The fusion of an infrared image and a visible image retains the thermal radiation information of the infrared image and the texture information of the visible image. It is widely used in target detection, target tracking, night vision, biological recognition and other fields.
At present, the most widely studied infrared and visible image fusion methods are based on multi-scale decomposition, sparse representation, saliency and deep learning. Among them, multi-scale decomposition is the most mature, with methods such as the pyramid transform, the wavelet transform and the contourlet transform. This kind of fusion method is robust, but the fusion results lack deeper levels of image detail. In recent years, deep learning has become a hot research direction for image fusion because of its outstanding advantages in image processing. Existing fusion methods based on deep learning have advantages in preserving image detail, but they still suffer from limitations such as low fusion efficiency and fuzzy edge features.
SUMMARY
In view of the above shortcomings in the prior art, the invention provides a fusion method of
infrared image and visible image, which solves the problems of fuzzy edge features and omission of
fusion details in the prior art.
In order to achieve the above object of the invention, the technical scheme adopted by the invention
is as follows.
The fusion method of infrared image and visible image comprises the following steps.
S1. Getting original guidance images of the infrared image and the visible image, respectively.
S2. The above original guidance images are iterated to obtain images of base layers and detail
layers corresponding to the infrared image and visible image, respectively.
S3. Obtaining active maps of detail layer images corresponding to the infrared image and visible
image, respectively.
S4. The base layer images corresponding to the infrared image and the visible image are fused to
obtain the base fusion image.
S5. Fusing active maps of the detail layer images corresponding to the infrared image and the
visible image to obtain the detail layer fusion image.
S6. The images of the base layer and the detail layer are fused to get the final fusion image of the
infrared image and the visible image.
Further, the specific method for acquiring the original guidance image in S1 is as follows.
Based on the following formulas,

G_k(p) = \frac{1}{U_p} \sum_{q \in N(p)} \exp\left( -\frac{\|p-q\|^2}{2\sigma_s^2} \right) X_k(q)

U_p = \sum_{q \in N(p)} \exp\left( -\frac{\|p-q\|^2}{2\sigma_s^2} \right)

Gaussian filtering is applied to the pixel p of the source image to obtain the original guidance data G_k(p) at the pixel p, and then the whole original guidance image G_k is obtained, G_k(p) ∈ G_k. Wherein, k ∈ {I, V}, representing the infrared image and the visible image respectively; q represents a pixel adjacent to the pixel p; U_p is the regularization function; N(p) is the set of pixels adjacent to the pixel p; exp(·) represents the exponential function with the natural constant e as its base; σ_s is a structural scale parameter; X_k(q) is the pixel q of the source image X_k.
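As an illustration only, the guidance image of S1 is a Gaussian-filtered copy of the source image. The sketch below assumes a SciPy-based implementation and an arbitrary value of the structural scale parameter σ_s, which the patent does not fix.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def original_guidance(source_image, sigma_s=3.0):
    """Sketch of S1: Gaussian filtering of the source image X_k gives the
    original guidance image G_k. The normalisation term U_p is handled
    implicitly, since the Gaussian kernel used by the filter sums to one.
    sigma_s = 3.0 is an assumed value for the structural scale parameter."""
    return gaussian_filter(source_image.astype(np.float64), sigma=sigma_s)
```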
Further, the specific method of iterating the original guidance image in S2 is as follows.
According to the formula

O_k^i(p) = K^{i+1}(p) = \frac{1}{U_p} \sum_{q \in N(p)} \exp\left( -\frac{\|p-q\|^2}{2\sigma_s^2} - \frac{\left( K^i(p) - K^i(q) \right)^2}{2\sigma_N^2} \right) X_k(q)

all inputs are set to X_k(q) throughout the iteration, wherein i is the number of iterations. From the result O_k^i(p) of the i-th iteration, the result O_k^i of the i-th iteration of the whole original guidance image is obtained, that is, the base layer image B_k, O_k^i(p) ∈ O_k^i = B_k. Wherein, K^{i+1}(p) represents the output result of the i-th iteration; K^i(p) represents the output result of the (i-1)-th iteration; K^i(q) represents the iterative output at the pixel q adjacent to the pixel p; σ_N is the range weight.
Then the detail layer image D_k is obtained according to the formula

D_k = X_k - B_k
Further, the maximum number of iterations for the original guidance image is 4.
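For illustration, the iteration of S2 is a joint-bilateral, rolling-guidance-style update. The sketch below is a naive implementation under assumed parameter values; σ_s, the range weight σ_N and the window radius are not specified in the patent, and only the iteration count of 4 comes from the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(x, sigma_s=3.0, sigma_n=0.05, radius=5, iterations=4):
    """Sketch of S2: iterate the guided update to obtain the base layer B_k,
    then take D_k = X_k - B_k as the detail layer. Naive sliding-window
    implementation, written for clarity rather than speed."""
    x = x.astype(np.float64)
    h, w = x.shape
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_s ** 2))  # exp(-||p-q||^2 / (2*sigma_s^2))
    k = gaussian_filter(x, sigma=sigma_s)                        # K^1: the original guidance image (S1)
    for _ in range(iterations):
        xp = np.pad(x, radius, mode='reflect')
        kp = np.pad(k, radius, mode='reflect')
        num = np.zeros_like(x)
        den = np.zeros_like(x)
        for i in range(2 * radius + 1):
            for j in range(2 * radius + 1):
                xq = xp[i:i + h, j:j + w]                        # X_k(q) for this offset
                kq = kp[i:i + h, j:j + w]                        # K^i(q) for this offset
                wgt = spatial[i, j] * np.exp(-(k - kq) ** 2 / (2 * sigma_n ** 2))
                num += wgt * xq
                den += wgt
        k = num / den                                            # K^{i+1}(p), with the 1/U_p normalisation
    base = k                                                     # base layer image B_k
    detail = x - base                                            # detail layer image D_k
    return base, detail
```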
Further, the specific method for obtaining the activity map of the detail layer image in S3 includes
the following sub-steps.
S3-1. According to the formula

\varphi_k^{j,1:M} = \Phi_j(D_k)

a VGG-19 network with four convolution layers is established, and the M channel maps \varphi_k^{j,1:M} of the detail layer image in the j-th convolution layer are obtained, wherein k ∈ {I, V}, representing the infrared image and the visible image respectively; D_k represents the detail layer image; \Phi_j(\cdot) represents the j-th convolution layer of the VGG-19 network; M = 64 \times 2^{j-1}.
S3-2. The initial activity level data Q_k^j(x, y) at the coordinate (x, y) of the detail layer image is obtained based on the following formula,

Q_k^j(x, y) = \left\| \varphi_k^{j,1:M}(x, y) \right\|_1

Then, the initial activity level map Q_k^j corresponding to the whole detail layer image is obtained, wherein Q_k^j(x, y) ∈ Q_k^j and \|\cdot\|_1 represents the l1 norm.
S3-3. The activity map \hat{Q}_k^j(x, y) at the coordinate (x, y) of the detail layer image is obtained according to the formula

\hat{Q}_k^j(x, y) = \frac{ \sum_{\alpha=-\omega}^{\omega} \sum_{\beta=-\omega}^{\omega} Q_k^j(x+\alpha, y+\beta) }{ (2\omega+1)^2 }

Then, the activity map \hat{Q}_k^j corresponding to the whole detail layer image is obtained, where \hat{Q}_k^j(x, y) ∈ \hat{Q}_k^j and ω is the parameter determining the block size.
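As a sketch of S3, the fragment below uses a pretrained torchvision VGG-19 and taps its first four ReLU outputs (relu1_1, relu2_1, relu3_1 and relu4_1, at feature indices 1, 6, 11 and 20) as the "four convolution layers"; which layers the patent means, and the use of pretrained weights, are assumptions. The channel counts of those taps match M = 64 × 2^{j-1}, and the block average is implemented with a (2ω+1) × (2ω+1) average pooling.

```python
import numpy as np
import torch
import torch.nn.functional as F
import torchvision.models as models

def detail_activity_maps(detail, omega=1):
    """Sketch of S3: extract VGG-19 feature maps of a detail layer image,
    take the per-pixel l1 norm over the M channels (initial activity level
    map Q), then block-average over a (2*omega+1)^2 window (activity map)."""
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
    # VGG-19 expects a 3-channel input: replicate the single-channel detail layer.
    x = torch.from_numpy(np.repeat(detail[None, None, :, :], 3, axis=1)).float()
    tap_indices = {1, 6, 11, 20}          # relu1_1, relu2_1, relu3_1, relu4_1 (assumed)
    activity = []
    with torch.no_grad():
        for idx, layer in enumerate(vgg):
            x = layer(x)
            if idx in tap_indices:
                q = x.abs().sum(dim=1, keepdim=True)               # l1 norm over channels
                a = F.avg_pool2d(q, 2 * omega + 1, stride=1, padding=omega)
                activity.append(a[0, 0].numpy())
            if idx == max(tap_indices):
                break
    return activity                        # one activity map per convolution layer j
```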
Further, the specific method in S4 includes the following sub-steps.
S4-1. The base layer image B_k is transformed from an m×n two-dimensional matrix into a single-row matrix B_k', wherein the element values at positions (((x-1)×n+1):(x×n)) of the single-row matrix B_k' are the element values of the x-th row of B_k.
S4-2. Based on the formula

W_{B_k}(x, y) = \mathrm{mapminmax}(B_k', 0, 1) = \frac{ B_k'(x, y) - \min(B_k') }{ \max(B_k') - \min(B_k') }

the mapminmax function is used to normalize the single-row matrix B_k' to obtain the weight W_{B_k}(x, y) of the element B_k'(x, y) at the point (x, y). Then the whole weight matrix W_{B_k} is obtained, W_{B_k}(x, y) ∈ W_{B_k}. Wherein, k ∈ {I, V}, representing the infrared image and the visible image respectively; mapminmax(B_k', 0, 1) means normalizing the elements of the single-row matrix B_k' to (0, 1); min(B_k') represents the minimum pixel value in the single-row matrix B_k'; max(B_k') represents the maximum pixel value in the single-row matrix B_k'.
S4-3. According to the formula

F_B(x,:) = \sum_{k \in \{I,V\}} W_{B_k}(x) \times B_k(x,:)

the weight matrix corresponding to the infrared image and the weight matrix corresponding to the visible image are fused to get the fusion result F_B(x,:) of the row x, and then the overall fusion result F_B, namely the base layer fusion image, is obtained. Wherein, F_B(x,:) ∈ F_B; B_k(x,:) represents the element values of row x in B_k; W_{B_k}(x) represents the weights corresponding to the element values in row x.
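A minimal sketch of S4, assuming that the mapminmax-style weights are applied element-wise to each base layer and that the two weighted base layers are simply summed; the small eps guard against a constant image is an added assumption.

```python
import numpy as np

def fuse_base_layers(base_ir, base_vis, eps=1e-12):
    """Sketch of S4: flatten each m x n base layer B_k row-wise into a
    single-row matrix B_k', normalise it to (0, 1) in the spirit of MATLAB's
    mapminmax to obtain the weights W_Bk, and sum the weighted base layers."""
    fused = np.zeros_like(base_ir, dtype=np.float64)
    for b in (base_ir, base_vis):
        flat = b.reshape(1, -1).astype(np.float64)                 # single-row matrix B_k'
        weights = (flat - flat.min()) / (flat.max() - flat.min() + eps)
        fused += (weights * flat).reshape(b.shape)                 # W_Bk * B_k, row by row
    return fused                                                   # base layer fusion image F_B
```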
Further, the specific method in S5 includes the following sub-steps.
S5-1. The characteristic mapping weight maps W_k^j of the activity maps in the four convolution layers are obtained based on the following formula:

W_k^j(x, y) = \frac{ \hat{Q}_k^j(x, y) }{ \sum_{n \in \{I,V\}} \hat{Q}_n^j(x, y) }, \quad k \in \{I, V\}

S5-2. According to the formula

\hat{W}_k^j(x+a, y+b) = W_k^j(x, y)

up-sampling is carried out at the point (x, y) of the characteristic mapping weight map W_k^j to obtain the registered weight map \hat{W}_k^j(x+a, y+b) at (x, y), and then the overall registered weight map \hat{W}_k^j is obtained, wherein a, b ∈ {0, 1, ..., 2^{j-1} - 1}.
S5-3. According to the following formula, the data \hat{W}_k^j(x, y) at (x, y) of the registered weight map \hat{W}_k^j and the data D_k(x, y) at (x, y) of the detail layer image D_k are fused to obtain the detail layer fusion data F_D^j(x, y) at (x, y), and then the overall detail layer fusion image F_D^j is obtained, F_D^j(x, y) ∈ F_D^j.

F_D^j(x, y) = \sum_{k \in \{I,V\}} \hat{W}_k^j(x, y) \times D_k(x, y)
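The sketch below illustrates S5, assuming that the weight at each scale is the activity map normalised over the two source images, and that the registration step replicates each weight value over a 2^{j-1} × 2^{j-1} block (nearest-neighbour up-sampling); the eps guard and the final crop/pad to the detail-layer size are added assumptions.

```python
import numpy as np

def fuse_detail_layers(act_ir, act_vis, detail_ir, detail_vis, eps=1e-12):
    """Sketch of S5: for each convolution layer j, turn the activity maps into
    weight maps, register them to the detail-layer resolution by pixel
    replication, and take the weighted sum of the two detail layers."""
    h, w = detail_ir.shape

    def register(weight_map, factor):
        # Replicate each weight over a factor x factor block, then match the
        # detail-layer size exactly (crop overshoot, edge-pad undershoot).
        up = np.kron(weight_map, np.ones((factor, factor)))[:h, :w]
        return np.pad(up, ((0, h - up.shape[0]), (0, w - up.shape[1])), mode='edge')

    fused_per_layer = []
    for j, (a_ir, a_vis) in enumerate(zip(act_ir, act_vis), start=1):
        total = a_ir + a_vis + eps
        w_ir = register(a_ir / total, 2 ** (j - 1))     # registered weight map, infrared
        w_vis = register(a_vis / total, 2 ** (j - 1))   # registered weight map, visible
        fused_per_layer.append(w_ir * detail_ir + w_vis * detail_vis)
    return fused_per_layer                              # detail layer fusion images F_D^j, j = 1..4
```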
Further, the specific method in S6 includes the following steps.
Based on the formula

F = F_B + F_D^j, \quad j \in \{1, 2, 3, 4\}

the base layer fusion image F_B and the detail layer fusion image F_D^j are added to obtain the final fusion image F of the infrared image and the visible image.
Further, the determining parameter ω of the block size is 1.
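Chaining the sketches above gives an end-to-end illustration of S1 to S6. Which of the four per-layer detail fusion images F_D^j is added to F_B (here the first) is an assumption; the patent states only F = F_B + F_D^j with j ∈ {1, 2, 3, 4}.

```python
def fuse_infrared_visible(x_ir, x_vis):
    """End-to-end sketch of the fusion method, S1 to S6, using the helper
    functions sketched above (decompose, detail_activity_maps,
    fuse_base_layers, fuse_detail_layers)."""
    base_ir, detail_ir = decompose(x_ir)                     # S1-S2
    base_vis, detail_vis = decompose(x_vis)
    act_ir = detail_activity_maps(detail_ir)                 # S3
    act_vis = detail_activity_maps(detail_vis)
    f_base = fuse_base_layers(base_ir, base_vis)             # S4
    f_detail = fuse_detail_layers(act_ir, act_vis,           # S5
                                  detail_ir, detail_vis)
    return f_base + f_detail[0]                              # S6 (j = 1 assumed)
```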
The beneficial effect of the invention is described below.
1. The invention firstly decomposes infrared image and visible image into base layer and detail
layer, which can remove small-scale structure while retaining edge details. Then image detail
features are extracted layer by layer based on VGG-19 network, and an activity level map of
infrared image and visible image is obtained. Finally, different fusion strategies are adopted for base
layer and detail layer. The fusion result obtained by this method not only retains the texture
information of visible image, but also retains the thermal radiation information of infrared image.
The invention can be applied to the fields of target detection, target tracking, night vision, biological
recognition, etc.
2. Compared with the traditional multi-scale decomposition method and the method based on deep
learning, the present invention has advantages in preserving the deep details of the fused image and
in the efficiency of edge feature detection. Simulation experiments are carried out on the TNO infrared and visible image dataset (TNO dataset), and the fusion results show detail texture that is subjectively clear to the human visual system. Compared with other existing typical methods in qualitative index evaluation, the invention has advantages in common infrared and visible image fusion quality evaluation indexes such as entropy, spatial frequency, standard deviation, average gradient and mutual information.
BRIEF DESCRIPTION OF THE FIGURES
Figure 1 is a flow diagram of the invention.
Figure 2 shows the iterative process.
Figure 3 is a detailed comparison of the fusion results of this method and other five existing
methods.
Figure 4 shows the fusion result comparison of 10 pairs of images selected from TNO dataset
between this method and other 7 existing methods.
Figure 5 shows the comparison of the fusion result quality evaluation indexes between this method
and other 7 existing methods.
DESCRIPTION OF THE INVENTION
The specific embodiments of the invention are described below to facilitate those skilled in the field in understanding the invention. However, it should be clear that the invention is not limited to the scope of the specific embodiments. For ordinary technicians in the art, various changes within the spirit and scope of the invention, as defined and determined by the attached claims, are obvious. All inventions and creations based on the concept of the invention fall within the scope of protection.
As shown in Fig. 1, the fusion method of infrared image and visible image includes the following
steps.
S1. Getting original guidance images of the infrared image and the visible image, respectively.
S2. The above original guidance images are iterated to obtain images of base layers and detail
layers corresponding to the infrared image and visible image, respectively.
S3. Obtaining active maps of detail layer images corresponding to the infrared image and visible
image, respectively.
S4. The base layer images corresponding to the infrared image and the visible image are fused to
obtain the base fusion image.
S5. Fusing active maps of the detail layer images corresponding to the infrared image and the
visible image to obtain the detail layer fusion image.
S6. The images of the base layer and the detail layer are fused to get the final fusion image of the
infrared image and the visible image.
In S1, the specific method for acquiring the original guidance image is as follows.
Based on the following formulas,

G_k(p) = \frac{1}{U_p} \sum_{q \in N(p)} \exp\left( -\frac{\|p-q\|^2}{2\sigma_s^2} \right) X_k(q)

U_p = \sum_{q \in N(p)} \exp\left( -\frac{\|p-q\|^2}{2\sigma_s^2} \right)

Gaussian filtering is applied to the pixel p of the source image to obtain the original guidance data G_k(p) at the pixel p, and then the whole original guidance image G_k is obtained, G_k(p) ∈ G_k. Wherein, k ∈ {I, V}, representing the infrared image and the visible image respectively; q represents a pixel adjacent to the pixel p; U_p is the regularization function; N(p) is the set of pixels adjacent to the pixel p; exp(·) represents the exponential function with the natural constant e as its base; σ_s is a structural scale parameter; X_k(q) is the pixel q of the source image X_k.
As shown in Fig. 2, the specific method of iterating the original guidance image in S2 is as follows.
According to the formula

O_k^i(p) = K^{i+1}(p) = \frac{1}{U_p} \sum_{q \in N(p)} \exp\left( -\frac{\|p-q\|^2}{2\sigma_s^2} - \frac{\left( K^i(p) - K^i(q) \right)^2}{2\sigma_N^2} \right) X_k(q)

all inputs are set to X_k(q) throughout the iteration, wherein i is the number of iterations. From the result O_k^i(p) of the i-th iteration, the result O_k^i of the i-th iteration of the whole original guidance image is obtained, that is, the base layer image B_k, O_k^i(p) ∈ O_k^i = B_k. Wherein, K^{i+1}(p) represents the output result of the i-th iteration; K^i(p) represents the output result of the (i-1)-th iteration; K^i(q) represents the iterative output at the pixel q adjacent to the pixel p; σ_N is the range weight.
Then the detail layer image D_k is obtained according to the formula

D_k = X_k - B_k

The maximum number of iterations for the original guidance image is 4.
The specific method for obtaining the activity map of the detail layer image in S3 includes the
following sub-steps.
S3-1. According to the formula

\varphi_k^{j,1:M} = \Phi_j(D_k)

a VGG-19 network with four convolution layers is established, and the M channel maps \varphi_k^{j,1:M} of the detail layer image in the j-th convolution layer are obtained, wherein k ∈ {I, V}, representing the infrared image and the visible image respectively; D_k represents the detail layer image; \Phi_j(\cdot) represents the j-th convolution layer of the VGG-19 network; M = 64 \times 2^{j-1}.
S3-2. The initial activity level data Q_k^j(x, y) at the coordinate (x, y) of the detail layer image is obtained based on the following formula,

Q_k^j(x, y) = \left\| \varphi_k^{j,1:M}(x, y) \right\|_1

Then, the initial activity level map Q_k^j corresponding to the whole detail layer image is obtained, wherein Q_k^j(x, y) ∈ Q_k^j and \|\cdot\|_1 represents the l1 norm.
S3-3. The activity map \hat{Q}_k^j(x, y) at the coordinate (x, y) of the detail layer image is obtained according to the formula

\hat{Q}_k^j(x, y) = \frac{ \sum_{\alpha=-\omega}^{\omega} \sum_{\beta=-\omega}^{\omega} Q_k^j(x+\alpha, y+\beta) }{ (2\omega+1)^2 }

Then, the activity map \hat{Q}_k^j corresponding to the whole detail layer image is obtained, where \hat{Q}_k^j(x, y) ∈ \hat{Q}_k^j and ω, the parameter determining the block size, is 1.
The specific method in S4 includes the following sub-steps.
S4-1. The base layer image B_k is transformed from an m×n two-dimensional matrix into a single-row matrix B_k', wherein the element values at positions (((x-1)×n+1):(x×n)) of the single-row matrix B_k' are the element values of the x-th row of B_k.
S4-2. Based on the formula

W_{B_k}(x, y) = \mathrm{mapminmax}(B_k', 0, 1) = \frac{ B_k'(x, y) - \min(B_k') }{ \max(B_k') - \min(B_k') }

the mapminmax function is used to normalize the single-row matrix B_k' to obtain the weight W_{B_k}(x, y) of the element B_k'(x, y) at the point (x, y). Then the whole weight matrix W_{B_k} is obtained, W_{B_k}(x, y) ∈ W_{B_k}. Wherein, k ∈ {I, V}, representing the infrared image and the visible image respectively; mapminmax(B_k', 0, 1) means normalizing the elements of the single-row matrix B_k' to (0, 1); min(B_k') represents the minimum pixel value in the single-row matrix B_k'; max(B_k') represents the maximum pixel value in the single-row matrix B_k'.
S4-3. According to the formula

F_B(x,:) = \sum_{k \in \{I,V\}} W_{B_k}(x) \times B_k(x,:)

the weight matrix corresponding to the infrared image and the weight matrix corresponding to the visible image are fused to get the fusion result F_B(x,:) of the row x, and then the overall fusion result F_B, namely the base layer fusion image, is obtained. Wherein, F_B(x,:) ∈ F_B; B_k(x,:) represents the element values of row x in B_k; W_{B_k}(x) represents the weights corresponding to the element values in row x.
The specific method in S5 includes the following sub-steps.
S5-1. The characteristic mapping weight maps W_k^j of the activity maps in the four convolution layers are obtained based on the following formula:

W_k^j(x, y) = \frac{ \hat{Q}_k^j(x, y) }{ \sum_{n \in \{I,V\}} \hat{Q}_n^j(x, y) }, \quad k \in \{I, V\}

S5-2. According to the formula

\hat{W}_k^j(x+a, y+b) = W_k^j(x, y)

up-sampling is carried out at the point (x, y) of the characteristic mapping weight map W_k^j to obtain the registered weight map \hat{W}_k^j(x+a, y+b) at (x, y), and then the overall registered weight map \hat{W}_k^j is obtained, wherein a, b ∈ {0, 1, ..., 2^{j-1} - 1}.
S5-3. According to the following formula, the data \hat{W}_k^j(x, y) at (x, y) of the registered weight map \hat{W}_k^j and the data D_k(x, y) at (x, y) of the detail layer image D_k are fused to obtain the detail layer fusion data F_D^j(x, y) at (x, y), and then the overall detail layer fusion image F_D^j is obtained, F_D^j(x, y) ∈ F_D^j.

F_D^j(x, y) = \sum_{k \in \{I,V\}} \hat{W}_k^j(x, y) \times D_k(x, y)
The specific method in S6 includes the following steps.
Based on the formula

F = F_B + F_D^j, \quad j \in \{1, 2, 3, 4\}

the base layer fusion image F_B and the detail layer fusion image F_D^j are added to obtain the final fusion image F of the infrared image and the visible image.
In one embodiment of the invention, the field image is fused as shown in Fig. 3. Fig. 3(a), Fig. 3(b), Fig. 3(c), Fig. 3(d) and Fig. 3(e) are fusion results obtained by the prior art, and Fig. 3(f) is the fusion result obtained by the present method. It can be seen from the box in the lower left corner of the figure that the fusion image obtained by this method has detail texture that is subjectively clearer to the human visual system.
In another embodiment of the invention, the fusion results of this method on 10 pairs of images selected from the TNO dataset are compared with those of 7 other existing methods, as shown in Fig. 4. In Fig. 4, from top to bottom, the rows are: the visible image, the infrared image, the fusion result based on a convolutional neural network, the fusion result based on the rolling guidance filter, the fusion result based on multi-level decomposition latent low-rank representation, the fusion result based on the visual saliency map and the weighted least squares filter, the fusion result based on the non-subsampled contourlet transform, the fusion result based on infrared feature extraction and visual information preservation, the fusion result based on the residual network, and the fusion result of the method proposed in the invention. As can be seen from Fig. 4, this method has advantages in retaining deep details and detecting edge features efficiently, and the fusion result shows detail texture that is subjectively clear to the human visual system.
In this embodiment, as shown in Fig. 5, the fusion result quality evaluation indexes of this method and the seven other existing methods are also compared visually, in which bold indicates the best method, a double underline indicates the second best, and a single underline indicates the third best. It can be seen that this method is the best in spatial frequency, standard deviation and average gradient, and is also in the top three in entropy and mutual information. Moreover, the overall effect of the method is better than that of the above existing technology.
In summary, the invention firstly decomposes infrared image and visible image into base layer and
detail layer, which can remove small-scale structure while retaining edge details. Then image detail
features are extracted layer by layer based on VGG-19 network, and an activity level map of
infrared image and visible image is obtained. Finally, different fusion strategies are adopted for base
layer and detail layer. The fusion result obtained by this method not only retains the texture
information of visible image, but also retains the thermal radiation information of infrared image.
The invention can be applied to the fields of target detection, target tracking, night vision, biological
recognition, etc.

Claims (9)

THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:
1. A fusion method of infrared image and visible image, characterized by including the following steps.
S1. Getting original guidance images of the infrared image and the visible image, respectively.
S2. The above original guidance images are iterated to obtain images of base layers and detail
layers corresponding to the infrared image and visible image, respectively.
S3. Obtaining active maps of detail layer images corresponding to the infrared image and visible
image, respectively.
S4. The base layer images corresponding to the infrared image and the visible image are fused to
obtain the base fusion image.
S5. Fusing active maps of the detail layer images corresponding to the infrared image and the
visible image to obtain the detail layer fusion image.
S6. The images of the base layer and the detail layer are fused to get the final fusion image of the
infrared image and the visible image.
2. The fusion method of infrared image and visible image according to Claim 1, characterized in
that the specific method for acquiring the original guidance image in S1 is as follows.
Based on the following formulas,

G_k(p) = \frac{1}{U_p} \sum_{q \in N(p)} \exp\left( -\frac{\|p-q\|^2}{2\sigma_s^2} \right) X_k(q)

U_p = \sum_{q \in N(p)} \exp\left( -\frac{\|p-q\|^2}{2\sigma_s^2} \right)

Gaussian filtering is applied to the pixel p of the source image to obtain the original guidance data G_k(p) at the pixel p, and then the whole original guidance image G_k is obtained, G_k(p) ∈ G_k. Wherein, k ∈ {I, V}, representing the infrared image and the visible image respectively; q represents a pixel adjacent to the pixel p; U_p is the regularization function; N(p) is the set of pixels adjacent to the pixel p; exp(·) represents the exponential function with the natural constant e as its base; σ_s is a structural scale parameter; X_k(q) is the pixel q of the source image X_k.
3. The fusion method of infrared image and visible image according to Claim 2, characterized in
that the specific method of iterating the original guidance image in S2 is as follows.
According to the formula
O_k^i(p) = K^{i+1}(p) = \frac{1}{U_p} \sum_{q \in N(p)} \exp\left( -\frac{\|p-q\|^2}{2\sigma_s^2} - \frac{\left( K^i(p) - K^i(q) \right)^2}{2\sigma_N^2} \right) X_k(q)

all inputs are set to X_k(q) throughout the iteration, wherein i is the number of iterations. From the result O_k^i(p) of the i-th iteration, the result O_k^i of the i-th iteration of the whole original guidance image is obtained, that is, the base layer image B_k, O_k^i(p) ∈ O_k^i = B_k. Wherein, K^{i+1}(p) represents the output result of the i-th iteration; K^i(p) represents the output result of the (i-1)-th iteration; K^i(q) represents the iterative output at the pixel q adjacent to the pixel p; σ_N is the range weight.
Then the detail layer image D_k is obtained according to the formula

D_k = X_k - B_k
4. The fusion method of infrared image and visible image according to Claim 3, characterized in
that the maximum number of iterations for the original guidance image is 4.
5. The fusion method of infrared image and visible image according to Claim 1, characterized in
that the specific method for obtaining the activity map of the detail layer image in S3 includes the
following sub-steps.
S3-1. According to the formula

\varphi_k^{j,1:M} = \Phi_j(D_k)

a VGG-19 network with four convolution layers is established, and the M channel maps \varphi_k^{j,1:M} of the detail layer image in the j-th convolution layer are obtained, wherein k ∈ {I, V}, representing the infrared image and the visible image respectively; D_k represents the detail layer image; \Phi_j(\cdot) represents the j-th convolution layer of the VGG-19 network; M = 64 \times 2^{j-1}.
S3-2. The initial activity level data Q_k^j(x, y) at the coordinate (x, y) of the detail layer image is obtained based on the following formula,

Q_k^j(x, y) = \left\| \varphi_k^{j,1:M}(x, y) \right\|_1

Then, the initial activity level map Q_k^j corresponding to the whole detail layer image is obtained, wherein Q_k^j(x, y) ∈ Q_k^j and \|\cdot\|_1 represents the l1 norm.
S3-3. The activity map \hat{Q}_k^j(x, y) at the coordinate (x, y) of the detail layer image is obtained according to the formula

\hat{Q}_k^j(x, y) = \frac{ \sum_{\alpha=-\omega}^{\omega} \sum_{\beta=-\omega}^{\omega} Q_k^j(x+\alpha, y+\beta) }{ (2\omega+1)^2 }

Then, the activity map \hat{Q}_k^j corresponding to the whole detail layer image is obtained, where \hat{Q}_k^j(x, y) ∈ \hat{Q}_k^j and ω is the parameter determining the block size.
6. The fusion method of infrared image and visible image according to Claim 1, characterized in
that the specific method in S4 includes the following sub-steps.
S4-1. The base layer image B_k is transformed from an m×n two-dimensional matrix into a single-row matrix B_k', wherein the element values at positions (((x-1)×n+1):(x×n)) of the single-row matrix B_k' are the element values of the x-th row of B_k.
S4-2. Based on the formula

W_{B_k}(x, y) = \mathrm{mapminmax}(B_k', 0, 1) = \frac{ B_k'(x, y) - \min(B_k') }{ \max(B_k') - \min(B_k') }

the mapminmax function is used to normalize the single-row matrix B_k' to obtain the weight W_{B_k}(x, y) of the element B_k'(x, y) at the point (x, y). Then the whole weight matrix W_{B_k} is obtained, W_{B_k}(x, y) ∈ W_{B_k}. Wherein, k ∈ {I, V}, representing the infrared image and the visible image respectively; mapminmax(B_k', 0, 1) means normalizing the elements of the single-row matrix B_k' to (0, 1); min(B_k') represents the minimum pixel value in the single-row matrix B_k'; max(B_k') represents the maximum pixel value in the single-row matrix B_k'.
S4-3. According to the formula

F_B(x,:) = \sum_{k \in \{I,V\}} W_{B_k}(x) \times B_k(x,:)

the weight matrix corresponding to the infrared image and the weight matrix corresponding to the visible image are fused to get the fusion result F_B(x,:) of the row x, and then the overall fusion result F_B, namely the base layer fusion image, is obtained. Wherein, F_B(x,:) ∈ F_B; B_k(x,:) represents the element values of row x in B_k; W_{B_k}(x) represents the weights corresponding to the element values in row x.
7. The fusion method of infrared image and visible image according to Claim 5, characterized in
that the specific method in S5 includes the following sub-steps.
S5-1. The characteristic mapping weight maps W_k^j of the activity maps in the four convolution layers are obtained based on the following formula:

W_k^j(x, y) = \frac{ \hat{Q}_k^j(x, y) }{ \sum_{n \in \{I,V\}} \hat{Q}_n^j(x, y) }, \quad k \in \{I, V\}

S5-2. According to the formula

\hat{W}_k^j(x+a, y+b) = W_k^j(x, y)

up-sampling is carried out at the point (x, y) of the characteristic mapping weight map W_k^j to obtain the registered weight map \hat{W}_k^j(x+a, y+b) at (x, y), and then the overall registered weight map \hat{W}_k^j is obtained, wherein a, b ∈ {0, 1, ..., 2^{j-1} - 1}.
S5-3. According to the following formula, the data \hat{W}_k^j(x, y) at (x, y) of the registered weight map \hat{W}_k^j and the data D_k(x, y) at (x, y) of the detail layer image D_k are fused to obtain the detail layer fusion data F_D^j(x, y) at (x, y), and then the overall detail layer fusion image F_D^j is obtained, F_D^j(x, y) ∈ F_D^j.

F_D^j(x, y) = \sum_{k \in \{I,V\}} \hat{W}_k^j(x, y) \times D_k(x, y)
8. The fusion method of infrared image and visible image according to Claim 7, characterized in
that the specific method in S6 includes the following steps.
Based on the formula

F = F_B + F_D^j, \quad j \in \{1, 2, 3, 4\}

the base layer fusion image F_B and the detail layer fusion image F_D^j are added to obtain the final fusion image F of the infrared image and the visible image.
9. The fusion method of infrared image and visible image according to Claim 5, characterized in
that the determining parameter ω of the block size is 1.
AU2021101531A 2021-03-25 2021-03-25 A Fusion Method of Infrared Image and Visible Image Ceased AU2021101531A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021101531A AU2021101531A4 (en) 2021-03-25 2021-03-25 A Fusion Method of Infrared Image and Visible Image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2021101531A AU2021101531A4 (en) 2021-03-25 2021-03-25 A Fusion Method of Infrared Image and Visible Image

Publications (1)

Publication Number Publication Date
AU2021101531A4 true AU2021101531A4 (en) 2021-05-13

Family

ID=75829049

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021101531A Ceased AU2021101531A4 (en) 2021-03-25 2021-03-25 A Fusion Method of Infrared Image and Visible Image

Country Status (1)

Country Link
AU (1) AU2021101531A4 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610738A (en) * 2021-08-06 2021-11-05 烟台艾睿光电科技有限公司 Image processing method, device, equipment and computer readable storage medium
CN113674192A (en) * 2021-08-24 2021-11-19 燕山大学 Method, system and device for fusing infrared video image and visible light video image
CN113674192B (en) * 2021-08-24 2024-02-02 燕山大学 Method, system and device for fusing infrared video image and visible light video image
CN117351049A (en) * 2023-12-04 2024-01-05 四川金信石信息技术有限公司 Thermal imaging and visible light fusion measuring point registration guiding method, device and medium
CN117351049B (en) * 2023-12-04 2024-02-13 四川金信石信息技术有限公司 Thermal imaging and visible light fusion measuring point registration guiding method, device and medium


Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry