CN109360175A - Infrared and visible light image fusion method - Google Patents

Infrared and visible light image fusion method

Info

Publication number
CN109360175A
CN109360175A
Authority
CN
China
Prior art keywords
image
fusion
base
component
detail component
Prior art date
Legal status
Pending
Application number
CN201811187745.8A
Other languages
Chinese (zh)
Inventor
聂仁灿
刘栋
周冬明
贺康建
李华光
Current Assignee
Yunnan University YNU
Original Assignee
Yunnan University YNU
Priority date
Filing date
Publication date
Application filed by Yunnan University YNU filed Critical Yunnan University YNU
Priority to CN201811187745.8A
Publication of CN109360175A
Legal status: Pending

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/10048: Image acquisition modality: infrared image
    • G06T2207/20221: Special algorithmic details: image combination; image fusion; image merging
    • G06T2207/30168: Subject/context of image processing: image quality inspection

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an infrared and visible light image fusion method. Based on a deep convolutional neural network and a saliency detection algorithm, the method processes an infrared image and a visible light image through image decomposition, image fusion and image superposition, and obtains a fused image of better fusion quality, with more complete local detail, that can be observed directly by human vision.

Description

An infrared and visible light image fusion method
Technical field
The invention belongs to the technical field of image processing, and in particular relates to an infrared and visible light image fusion method.
Background technique
Image fusion is an important branch of information fusion and a research hotspot in computer vision and pattern recognition. Infrared and visible image fusion in particular plays a highly important role in the military field: a visible light image shows every object in a scene that is within the line of sight, while an infrared image can reveal objects hidden behind obstacles. Through image fusion, the salient components of the infrared image can be merged with the background of the visible light image, and the resulting fused image describes the objects in the same scene more accurately, more comprehensively and more reliably, while also lending itself to intuitive human visual observation.
At present, image fusion algorithms based on multi-scale transforms are widely used, such as the Laplacian pyramid transform, the discrete wavelet transform, the non-subsampled contourlet transform and the non-subsampled shearlet transform. These multi-scale algorithms can be summarized in three steps: (1) apply a multi-scale decomposition to the source images to obtain component maps carrying different information; (2) according to the characteristics of each image component, choose a suitable fusion rule and fuse the corresponding components separately; (3) obtain the final fused image by the inverse multi-scale transform.
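The three generic steps can be sketched in code. This is only an illustration: an undecimated scheme built from repeated Gaussian low-pass filtering stands in for the named transforms (pyramids, wavelets, shearlets), and the max-absolute and averaging rules are common textbook choices, not rules taken from the invention:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_fuse(a, b, levels=3, sigma=2.0):
    """Fuse two registered grayscale images by the three generic steps:
    decompose, fuse each component, then apply the inverse transform."""
    fused = np.zeros_like(a, dtype=float)
    la, lb = a.astype(float), b.astype(float)
    for _ in range(levels):
        # Step 1: split each image into a coarse base and a detail residual.
        ba, bb = gaussian_filter(la, sigma), gaussian_filter(lb, sigma)
        da, db = la - ba, lb - bb
        # Step 2: detail components fused by a max-absolute rule.
        fused += np.where(np.abs(da) >= np.abs(db), da, db)
        la, lb = ba, bb
    # Step 2 (cont.): the coarsest base components fused by averaging;
    # Step 3: the summation acts as the inverse transform.
    return fused + 0.5 * (la + lb)
```

Because the residuals telescope, fusing an image with itself reconstructs it exactly, which is a quick sanity check on any such decomposition.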
However, images produced by existing image fusion algorithms lose part of their detail and texture, leaving the result unclear and sometimes even difficult to observe directly with the human eye.
Summary of the invention
In order to improve the fusion quality of infrared and visible images, the present invention provides an infrared and visible light image fusion method. The technical solution of the method is as follows:
Step 1. Image decomposition. The infrared image and the visible light image are decomposed with an l0-l1 regularization model, yielding two base components B1, B2 and two detail components D1, D2. The l0-l1 regularization model is computed as follows:
B* = argmin_B Σ_{p=1}^{N} [ (S_p − B_p)² + λ1·|∇B_p| + λ0·|∇(S_p − B_p)|₀ ]
where p is the image pixel index, N is the total number of pixels of the input image, and S, B and S − B are the input image, the base image and the detail image, respectively; the l1 gradient-sparsity term |∇B_p| represents the base component of the image, the l0 gradient-sparsity term |∇(S_p − B_p)|₀ represents the detail component, and λ1, λ0 balance the two terms.
Applying the above decomposition model yields the base component map and the detail component map of each input image:
Dk = Sk − Bk
where k = 1, 2 indexes the infrared and visible light source images.
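The base/detail split of Step 1 can be sketched minimally as follows. The patent's l0-l1 regularized model requires an iterative optimization solver that is not reproduced here, so a plain Gaussian low-pass stands in for it (an assumption of this sketch); only the bookkeeping Dk = Sk − Bk is taken from the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(s, sigma=3.0):
    """Split a source image S_k into a base component B_k and a detail
    component D_k = S_k - B_k.  A Gaussian low-pass stands in for the
    patent's l0-l1 regularized optimization (a hypothetical substitute)."""
    b = gaussian_filter(s.astype(float), sigma)
    return b, s - b
```

Whatever base extractor is used, the split is lossless by construction: adding the two components back together recovers the source image.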
Step 2. Image fusion. According to the characteristics of the base components and the detail components, different fusion rules are applied to fuse the base components and the detail components respectively:
A. Base component fusion rule: a saliency detection algorithm extracts the saliency components of the infrared and visible light images, which are combined with the two base components B1, B2 obtained by the image decomposition to obtain the fused base map Fb. The saliency component of the infrared and visible light images is obtained by the following formula:
S(x, y) = ||Iμ − IG(x, y)||
where Iμ is the mean-pixel map of the image and IG(x, y) is its guided-filtered map;
The fused base map Fb is obtained by the following formula:
Fb = S(x, y)*B1 + (1 − S(x, y))*B2
B. Detail component fusion rule: the deep convolutional neural network VGG19 extracts deep features of the two detail components D1, D2 to obtain multi-layer feature maps; the weight of each layer's feature map is fused with the detail components to obtain per-layer fusion components, and the optimal result among the per-layer fusion components is chosen as the fused detail map Fd.
The weight of each layer's feature map is determined by the following formula:
Wk^i(x, y) = Ck^i(x, y) / (C1^i(x, y) + C2^i(x, y))
where the activity map Ck^i is obtained by computing the l1 norm, across channels, of the layer-i feature maps of detail component Dk;
The per-layer fusion components and the fused detail map Fd are obtained respectively by the following formulas:
Fd^i(x, y) = W1^i(x, y)*D1(x, y) + W2^i(x, y)*D2(x, y)
Fd(x, y) = max_i Fd^i(x, y)
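The detail fusion rule can be sketched as below, assuming the VGG19 feature maps have already been extracted and resized to the detail-image size (the network forward pass itself is outside this sketch). The per-layer activity is the channel-wise l1 norm, and the "optimal result" is taken here as the element-wise maximum over the layer candidates, which is one plausible reading of the rule:

```python
import numpy as np

def detail_fusion(d1, d2, feats1, feats2, eps=1e-12):
    """Fuse detail components D1, D2 using per-layer feature activity.
    feats1/feats2: lists of (C, H, W) arrays, one per network layer,
    assumed precomputed and upsampled to the detail-image size."""
    candidates = []
    for f1, f2 in zip(feats1, feats2):
        c1 = np.abs(f1).sum(axis=0)        # activity map: channel-wise l1 norm
        c2 = np.abs(f2).sum(axis=0)
        w1 = c1 / (c1 + c2 + eps)          # weight map for source 1
        candidates.append(w1 * d1 + (1.0 - w1) * d2)  # layer-i fusion component
    return np.maximum.reduce(candidates)   # keep the strongest response per pixel
```

When one source's features are much more active at a pixel, its detail value dominates the fused output there, which is the intended behavior of the activity-based weighting.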
Step 3. Image superposition. The fused base map Fb and the fused detail map Fd are added to obtain the final fused RGB color image, with the addition formula F = Fb + Fd.
Beneficial effects of the present invention: by computing several common objective image-fusion evaluation metrics and comparing with several traditional fusion algorithms, the present method outperforms the comparison image-fusion algorithms in both subjective visual effect and objective evaluation criteria. The algorithm preserves the detail, texture and principal feature information of the source images well, so the fused image is clearer, more reliable and more convenient for intuitive human visual observation.
Description of the drawings
Fig. 1: the hybrid l0-l1 regularized image decomposition model;
Fig. 2: the base component fusion rule;
Fig. 3: the detail component fusion rule;
Fig. 4: the image reconstruction model within the detail component fusion model.
Specific embodiment
As shown in Fig. 1, the present invention provides an infrared and visible light image fusion method. The invention is described in further detail below with reference to the accompanying drawings.
An infrared and visible light image fusion method, comprising the following steps:
Step 1. The infrared and visible light images are decomposed, yielding two base components B1, B2 and two detail components D1, D2, where the decomposition model is as follows:
B* = argmin_B Σ_{p=1}^{N} [ (S_p − B_p)² + λ1·|∇B_p| + λ0·|∇(S_p − B_p)|₀ ] (1)
where p is the image pixel index, N is the total number of pixels of the input image, and S, B and S − B are the input image, the base image and the detail image, respectively. The l1 gradient-sparsity term |∇B_p| represents the base-layer information component of the image, while the detail information component of the image is represented by the l0 gradient-sparsity term |∇(S_p − B_p)|₀. By applying the above decomposition model, the base-layer and detail-layer component maps of the input images are obtained:
Dk=Sk-Bk (3)
where k = 1, 2 indexes the infrared and visible light source images. The image decomposition process is shown in Fig. 1.
Step 2. According to the characteristics of the base-layer and detail-layer components, different fusion rules are applied to fuse them respectively:
A. Base component fusion rule: a saliency detection algorithm extracts the saliency maps of the original infrared and visible light images, which are combined with the two base components obtained in Step 1 to obtain the fused base map (Fb), as shown in Fig. 2. The infrared saliency component can be obtained by the following formula:
S(x, y) = ||Iμ − IG(x, y)|| (4)
where Iμ is the mean-pixel map of the image and IG(x, y) is its guided-filtered map. The fused base map can thus be obtained by formula (5):
Fb = S(x, y)*B1 + (1 − S(x, y))*B2 (5)
B. Detail component fusion rule: the VGG19 model extracts deep features of the two detail layers D1, D2, and suitable weights are chosen to fuse the detail layers into the fused detail map Fd, as shown in Fig. 3. For the image reconstruction part, shown in Fig. 4, the activity map Ck^i of the individual features is obtained by computing the l1 norm of each layer's feature maps, from which the weight map Wk^i of each layer's feature maps can be computed according to formula (6):
Wk^i(x, y) = Ck^i(x, y) / (C1^i(x, y) + C2^i(x, y)) (6)
The weight of each layer's feature map is combined with the detail components to obtain the fusion component of each layer, and the optimal result among the per-layer fusion components is chosen as the final fused detail component, as shown in formulas (7)-(8):
Fd^i(x, y) = W1^i(x, y)*D1(x, y) + W2^i(x, y)*D2(x, y) (7)
Fd(x, y) = max_i Fd^i(x, y) (8)
Step 3. Finally, the fusion component maps Fb and Fd obtained in Step 2 are added to obtain the final fused RGB color image, as shown in formula (9):
F=Fb+Fd (9)
Table 1. Fusion-quality evaluation of the fused images obtained by different fusion methods
As can be seen from the objective evaluation metrics shown in Table 1, the values of the proposed algorithm on these objective indicators are more effective than those of the other algorithms, demonstrating the validity and feasibility of the proposed algorithm for infrared and visible image fusion.
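The patent does not name the objective metrics it computes. Shannon entropy of the fused image is one metric commonly used in fusion-quality evaluation and serves here purely as an illustration of how such an indicator is calculated:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image with values in [0, 256),
    a common objective fusion-quality indicator: higher entropy
    suggests the fused image carries more information."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()      # normalize counts to probabilities
    p = p[p > 0]               # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())
```

A constant image scores 0 bits, while an image whose pixels are spread evenly over all 256 bins scores the maximum of 8 bits.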
The present invention is not limited to the preferred embodiment described above. Anyone may, under the inspiration of the present invention, derive products of various other forms; any variation in shape or structure whose technical scheme is identical or similar to that of the present application falls within the protection scope of the present invention.

Claims (5)

1. An infrared and visible light image fusion method, characterized by comprising the following steps:
Step 1. Image decomposition: the infrared image and the visible light image are decomposed with an l0-l1 regularization model, yielding two base components B1, B2 and two detail components D1, D2;
Step 2. Image fusion: according to the characteristics of the base components and the detail components, different fusion rules are applied to fuse the base components and the detail components respectively:
A. Base component fusion rule: a saliency detection algorithm extracts the saliency components of the infrared and visible light images, which are combined with the two base components B1, B2 obtained by the image decomposition to obtain the fused base map Fb;
B. Detail component fusion rule: the deep convolutional neural network VGG19 extracts deep features of the two detail components D1, D2 to obtain multi-layer feature maps; the weight of each layer's feature map is fused with the detail components to obtain per-layer fusion components, and the optimal result among the per-layer fusion components is chosen as the fused detail map Fd;
Step 3. Image superposition: the fused base map Fb and the fused detail map Fd are added to obtain the final fused RGB color image.
2. The method according to claim 1, characterized in that the l0-l1 regularization model in Step 1 decomposes the infrared image and the visible light image according to the following formula:
B* = argmin_B Σ_{p=1}^{N} [ (S_p − B_p)² + λ1·|∇B_p| + λ0·|∇(S_p − B_p)|₀ ]
where p is the image pixel index, N is the total number of pixels of the input image, and S, B and S − B are the input image, the base-layer image and the detail image, respectively; the l1 gradient-sparsity term |∇B_p| represents the base component of the image, and the l0 gradient-sparsity term |∇(S_p − B_p)|₀ represents the detail component;
Applying the above decomposition model yields the base component map and the detail component map of each input image:
Dk = Sk − Bk
where k = 1, 2 indexes the infrared and visible light source images.
3. The method according to claim 1, characterized in that in the base component fusion rule of Step 2, the saliency components of the infrared image and the visible light image are obtained by the following formula:
S(x, y) = ||Iμ − IG(x, y)||
where Iμ is the mean-pixel map of the image and IG(x, y) is its guided-filtered map;
The fused base map Fb is obtained by the following formula:
Fb = S(x, y)*B1 + (1 − S(x, y))*B2
4. The method according to claim 1, characterized in that in the detail component fusion rule of Step 2 the weight is determined by the following formula:
Wk^i(x, y) = Ck^i(x, y) / (C1^i(x, y) + C2^i(x, y))
where Ck^i is the channel-wise l1 norm of the layer-i feature maps of detail component Dk;
The per-layer fusion components and the fused detail map Fd are obtained respectively by the following formulas:
Fd^i(x, y) = W1^i(x, y)*D1(x, y) + W2^i(x, y)*D2(x, y)
Fd(x, y) = max_i Fd^i(x, y)
5. The method according to claim 1, characterized in that the fused base map Fb and the fused detail map Fd in Step 3 are added by the formula:
F=Fb+Fd
CN201811187745.8A 2018-10-12 2018-10-12 Infrared and visible light image fusion method Pending CN109360175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811187745.8A CN109360175A (en) 2018-10-12 2018-10-12 Infrared and visible light image fusion method


Publications (1)

Publication Number Publication Date
CN109360175A true CN109360175A (en) 2019-02-19

Family

ID=65348968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811187745.8A Pending CN109360175A (en) 2018-10-12 2018-10-12 A kind of infrared image interfusion method with visible light

Country Status (1)

Country Link
CN (1) CN109360175A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248150A (en) * 2017-07-31 2017-10-13 杭州电子科技大学 A kind of Multiscale image fusion methods extracted based on Steerable filter marking area
CN108122219A (en) * 2017-11-30 2018-06-05 西北工业大学 Infrared and visible light image fusion method based on joint sparse and non-negative sparse
CN108052988A (en) * 2018-01-04 2018-05-18 常州工学院 Guiding conspicuousness image interfusion method based on wavelet transformation
CN108537264A (en) * 2018-03-30 2018-09-14 西安电子科技大学 Heterologous image matching method based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUI LI et al.: "Infrared and Visible Image Fusion using a Deep Learning Framework", 2018 24th International Conference on Pattern Recognition (ICPR) *
ZHETONG LIANG et al.: "A Hybrid l1-l0 Layer Decomposition Model for Tone Mapping", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
FU Zhizhong et al.: "Infrared and Visible Image Fusion Based on Visual Saliency and NSCT", Journal of University of Electronic Science and Technology of China *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097617A (en) * 2019-04-25 2019-08-06 北京理工大学 Image interfusion method based on convolutional neural networks Yu conspicuousness weight
CN110189284A (en) * 2019-05-24 2019-08-30 南昌航空大学 A kind of infrared and visible light image fusion method
CN110189286A (en) * 2019-05-30 2019-08-30 兰州交通大学 A kind of infrared and visible light image fusion method based on ResNet
CN110335225A (en) * 2019-07-10 2019-10-15 四川长虹电子系统有限公司 The method of infrared light image and visual image fusion
CN110335225B (en) * 2019-07-10 2022-12-16 四川长虹电子系统有限公司 Method for fusing infrared light image and visible light image
CN111179208A (en) * 2019-12-09 2020-05-19 天津大学 Infrared-visible light image fusion method based on saliency map and convolutional neural network
CN111179208B (en) * 2019-12-09 2023-12-08 天津大学 Infrared-visible light image fusion method based on saliency map and convolutional neural network
WO2022042049A1 (en) * 2020-08-31 2022-03-03 华为技术有限公司 Image fusion method, and training method and apparatus for image fusion model
CN112232403A (en) * 2020-10-13 2021-01-15 四川轻化工大学 Fusion method of infrared image and visible light image
CN113421200A (en) * 2021-06-23 2021-09-21 中国矿业大学(北京) Image fusion method based on multi-scale transformation and pulse coupling neural network
CN114004775A (en) * 2021-11-30 2022-02-01 四川大学 Infrared and visible light image fusion method combining potential low-rank representation and convolutional neural network

Similar Documents

Publication Publication Date Title
CN109360175A (en) Infrared and visible light image fusion method
CN104809734B (en) A method of the infrared image based on guiding filtering and visual image fusion
CN102881011B (en) Region-segmentation-based portrait illumination transfer method
CN103996198B (en) The detection method of area-of-interest under Complex Natural Environment
CN107749052A (en) Image defogging method and system based on deep learning neutral net
CN106462771A (en) 3D image significance detection method
CN108122219B (en) Infrared and visible light image fusion method based on joint sparse and non-negative sparse
CN106327459A (en) Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network)
CN104361574B (en) No-reference color image quality assessment method on basis of sparse representation
CN103295241A (en) Frequency domain significance target detection method based on Gabor wavelet
CN108805866A (en) The image method for viewing points detecting known based on quaternion wavelet transformed depth visual sense
CN108053398A (en) A kind of melanoma automatic testing method of semi-supervised feature learning
Casanova et al. Texture analysis using fractal descriptors estimated by the mutual interference of color channels
CN107371016A (en) Based on asymmetric distortion without with reference to 3D stereo image quality evaluation methods
CN106886986A (en) Image interfusion method based on the study of self adaptation group structure sparse dictionary
CN105590096A (en) Human motion recognition feature expression method based on depth mapping
US20140241627A1 (en) Environment evaluation apparatus, method and program
CN108363964A (en) A kind of pretreated wrinkle of skin appraisal procedure and system
Riegler et al. Anatomical landmark detection in medical applications driven by synthetic data
CN110197156A (en) Manpower movement and the shape similarity metric method and device of single image based on deep learning
CN103198456B (en) Remote sensing image fusion method based on directionlet domain hidden Markov tree (HMT) model
Kekre et al. Implementation and comparison of different transform techniques using Kekre's wavelet transform for image fusion
CN105809650A (en) Bidirectional iteration optimization based image integrating method
CN107680070B (en) Hierarchical weight image fusion method based on original image content
CN106296749A (en) RGB D image eigen decomposition method based on L1 norm constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190219