CN110349117A - Infrared image and visible light image fusion method, device and storage medium - Google Patents


Info

Publication number
CN110349117A
CN110349117A (application CN201910579632.0A)
Authority
CN
China
Prior art keywords
image
visible light
infrared
formula
wolf
Prior art date
Legal status
Granted
Application number
CN201910579632.0A
Other languages
Chinese (zh)
Other versions
CN110349117B (en)
Inventor
冯鑫
胡开群
袁毅
陈希瑞
张建华
翟治芬
Current Assignee
Chongqing Technology and Business University
Original Assignee
Chongqing Technology and Business University
Priority date
Filing date
Publication date
Application filed by Chongqing Technology and Business University
Priority to CN201910579632.0A
Publication of CN110349117A
Application granted
Publication of CN110349117B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides an infrared and visible image fusion method, device and storage medium. The method comprises: performing difference calculation on the infrared and visible images to obtain a difference image; decomposing the infrared image, the visible image and the difference image respectively according to a total variation model to obtain the cartoon and texture components of each image; constructing a fitness function for the wolf pack optimization iterative algorithm; determining weight terms and weight coefficients among the decomposed components and weighting them to obtain the fused image. The source images and the difference image are decomposed into cartoon and texture components, weight terms and weight coefficients are determined from the source images and components by the wolf pack optimization iterative algorithm, and the determined weight terms and coefficients are weighted and combined to obtain the final fused image. The fusion result is robust to noise while maintaining complete contour and detail information, and its clarity and contrast are also relatively high.

Description

Infrared and visible light image fusion method, device and storage medium
Technical Field
The invention relates mainly to the technical field of image processing, and in particular to an infrared and visible light image fusion method, device and storage medium.
Background
Infrared sensors reflect real scenes imperfectly: the images they form have low resolution and a low signal-to-noise ratio. Visible light sensors can clearly capture the detail of a scene under suitable conditions, but their imaging is easily affected by natural conditions such as illumination and weather. Exploiting the complementarity of the source images, an image fusion method can mine their characteristic information to highlight target information, improve the visual system's understanding of scene information, and serve purposes such as camouflage recognition and night vision. Research on infrared and visible image fusion helps develop and refine image fusion theory; its results provide a reference for image fusion in other fields and are of great significance to China's national defense and national construction. In the civil field, such fusion has been successfully applied in systems for tracking and positioning, fire early warning, package security inspection and automobile night driving. In the military field, fusing infrared and visible images yields more accurate and reliable target information and comprehensive scene information, so targets can still be captured under severe meteorological conditions; for example, an infrared-visible dual-band sniper sight can assist precise strikes on targets in a variety of harsh environments and improve all-weather combat capability.
At present, infrared and visible image fusion methods fall into two categories, based on multi-scale analysis and on sparse representation. Both tend to lose detail information, and the multi-scale methods involve a reconstruction step, so artifacts easily appear in the result and hinder later recognition.
Disclosure of Invention
The invention aims to solve the above technical problem of the prior art and provides an infrared and visible light image fusion method, device and storage medium.
The technical solution of the invention for solving the above technical problem is as follows: a method for fusing an infrared image and a visible light image comprises the following steps:
performing difference calculation on the infrared image and the visible light image to obtain an infrared-visible difference image;
decomposing the infrared image, the visible light image and the infrared-visible difference image respectively according to a total variation model to obtain the cartoon and texture components of the infrared image, of the visible light image and of the difference image;
constructing a fitness function of the wolf pack optimization iterative algorithm, built from the fusion indexes of information entropy, standard deviation and edge preservation;
determining weight terms and corresponding weight coefficients among the infrared, visible and difference cartoon and texture components according to the wolf pack optimization iterative algorithm and the constructed fitness function, the weight terms and coefficients serving as those of the fused image components, where the fused image is the image obtained by combining the infrared image and the visible light image;
directly performing a weighted calculation with the determined weight terms and coefficients to obtain the fused image.
Another technical solution of the present invention for solving the above technical problem is as follows: an infrared and visible light image fusion device, comprising:
a difference processing module, used for performing difference calculation on the infrared image and the visible light image to obtain an infrared-visible difference image;
a decomposition module, used for decomposing the infrared image, the visible light image and the infrared-visible difference image respectively according to a total variation model to obtain the cartoon and texture components of each image;
a function construction module, used for constructing a fitness function of the wolf pack optimization iterative algorithm;
a weight determining module, used for determining weight terms and corresponding weight coefficients among the infrared, visible and difference cartoon and texture components according to the wolf pack optimization iterative algorithm and the constructed fitness function, to serve as the weight terms and coefficients of the fused image components, where the fused image is the image obtained by combining the infrared image and the visible light image;
a fusion module, used for performing a weighted calculation with the determined weight terms and coefficients to obtain the fused image.
Another technical solution of the present invention for solving the above technical problems is as follows: an infrared image and visible light image fusion device comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein when the processor executes the computer program, the infrared image and visible light image fusion method is realized.
Another technical solution of the present invention for solving the above technical problems is as follows: a computer-readable storage medium, storing a computer program which, when executed by a processor, implements the infrared image and visible light image fusion method as described above.
The invention has the beneficial effects that the infrared source image, the visible light source image and the difference image are decomposed into cartoon and texture components by the total variation model, weight terms and weight coefficients are determined from the source images and the components by the wolf pack optimization iterative algorithm, and the determined weight terms and coefficients are weighted and combined to obtain the final fused image.
Drawings
FIG. 1 is a schematic flow chart of a fusion method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of image decomposition according to an embodiment of the present invention;
FIG. 3 is a block diagram of a fusion apparatus according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating the effect of each image component according to an embodiment of the present invention;
FIG. 5 is an experimental comparison diagram provided by an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, a method for fusing an infrared image and a visible light image comprises the following steps:
performing difference calculation on the infrared image and the visible light image to obtain an infrared-visible difference image;
decomposing the infrared image, the visible light image and the infrared-visible difference image respectively according to a total variation model to obtain the cartoon and texture components of the infrared image, of the visible light image and of the difference image;
constructing a fitness function of the wolf pack optimization iterative algorithm, built from the fusion indexes of information entropy, standard deviation and edge preservation;
determining weight terms and corresponding weight coefficients among the infrared, visible and difference cartoon and texture components according to the wolf pack optimization iterative algorithm and the constructed fitness function, the weight terms and coefficients serving as those of the fused image components, where the fused image is the image obtained by combining the infrared image and the visible light image;
performing a weighted calculation with the determined weight terms and coefficients to obtain the fused image.
In the above embodiment, the infrared source image, the visible light source image and the difference image are decomposed into cartoon and texture components by the total variation model, weight terms and weight coefficients are determined from the source images and components by the wolf pack optimization iterative algorithm, and the determined weight terms and coefficients are weighted and combined into the final fused image, giving the method strong noise robustness.
Optionally, as an embodiment of the present invention, as shown in fig. 2, the process of performing decomposition calculation on the infrared image, the visible light image, and the infrared and visible light differential image according to a total variation model includes:
the infrared image, the visible light image and the infrared and visible light differential image are all noise-free source images,
when the infrared image is subjected to decomposition calculation, defining the infrared image as follows according to a total variation problem in a total variation model:
Iinf=Tinf+Cinf
wherein, IinfRepresenting an infrared image, TinfRepresenting the texture component, C, of an infrared light imageinfRepresenting the cartoon component of the infrared light image,
when the visible light image is subjected to decomposition calculation, the visible light image is defined as follows according to a total variation problem in a total variation model:
Ivis=Tvis+Cvis
wherein, IvisRepresenting a visible light image, TvisRepresenting the texture component of the visible light differential image, CvisRepresenting the cartoon component of the visible light differential image,
when the infrared and visible light differential image is subjected to decomposition calculation, defining the infrared and visible light differential image as follows according to a total variation problem in a total variation model:
Idif=Tdif+Cdif
wherein, IdifRepresenting a differential image of infrared and visible light, TdifRepresenting the texture component, C, of the infrared and visible differential imagedifRepresenting cartoon component components of the infrared and visible light differential image;
the total variation model is TV-l1A model according to said TV-l when performing decomposition calculation on said infrared image1Model (model)Calculating a minimization function corresponding to the infrared image, wherein a minimization function formula corresponding to the infrared image is represented as a first formula, and the first formula is as follows:
wherein the solution of the first formula is cartoon component of the infrared light image,the total variation regularization term expressed as cartoon component of the infrared image, + lambda Iinf-Cinf||1d Ω is denoted as a fidelity term, λ is a regularization parameter,
performing decomposition calculation on the visible light image according to the TV-l1The model calculates a minimization function corresponding to the infrared image, and the minimization function formula corresponding to the visible light image is expressed as a second formula which is:
wherein the solution of the second expression is cartoon component of the visible light image,expressed as a total variation regularization term of cartoon component of visible image, lambda Ivis-Cvis||1d Ω is denoted as a fidelity term, λ is a regularization parameter,
according to the TV-l when the infrared and visible light differential image is decomposed and calculated1The model calculates a minimization function corresponding to the infrared image, and the minimization function formula of the infrared and visible light differential image is expressed as a third formula which is:
wherein the solution of the third formula is infraredAnd the cartoon component of the visible light image,expressed as a total variation regularization term of cartoon component components of infrared and visible light images, lambda Idif-Cdif||1d Ω is expressed as a fidelity term, and λ is expressed as a regularization parameter;
when the infrared image is decomposed and calculated, calculating the texture component of the infrared image according to a fourth formula, wherein the fourth formula is as follows:
Tinf=Iinf-Cinf
when the decomposition calculation is carried out on the visible light image, calculating texture component components of the visible light image according to a fifth formula, wherein the fifth formula is as follows:
Tvis=Ivis-Cvis
when the infrared and visible light differential image is decomposed and calculated, calculating texture component components of the infrared and visible light differential image according to a sixth formula, wherein the sixth formula is as follows:
Tdif=Idif-Cdif
respectively solving the optimization problem of the minimization function formula of the infrared image, the minimization function formula of the visible light image and the minimization function formula of the infrared and visible light differential image according to a gradient descent method:
wherein (i, j) represents the position and parameter of pixel point in the infrared light image or the visible light image or the infrared and visible light differential imageAndthe difference between forward and backward is shown separately,representing the magnitude of the gradient, n the number of iterations, am and an are the distances on the image grid, at represents the amount of time variation,epsilon is set to a minimum value.
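A minimal NumPy sketch of this gradient-descent TV-l1 decomposition; the discretization and the parameter values (`lam`, `dt`, `eps`, `n_iter`) are illustrative assumptions, not the patent's exact scheme:

```python
import numpy as np

def tv_l1_decompose(img, lam=0.8, dt=0.1, eps=1e-8, n_iter=100):
    """Split an image into cartoon C and texture T = I - C by gradient
    descent on the TV-l1 energy  |grad C| + lam * |I - C|."""
    I = img.astype(float)
    C = I.copy()
    for _ in range(n_iter):
        # forward differences approximate the gradient of C
        cx = np.roll(C, -1, axis=1) - C
        cy = np.roll(C, -1, axis=0) - C
        mag = np.sqrt(cx**2 + cy**2 + eps)   # regularized |grad C|
        px, py = cx / mag, cy / mag
        # backward differences give the divergence of the unit gradient field
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        # one descent step on the energy
        C = C + dt * (div + lam * np.sign(I - C))
    return C, I - C                           # cartoon, texture
```

Applying this to the infrared image, the visible image and their difference yields the cartoon and texture components that enter the later weighted combination.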
In this embodiment, decomposition uses the total variation model, which has a degree of noise robustness; the fidelity term forces the cartoon component to stay close to the original image, while the regularization parameter balances the total variation regularization term against the fidelity term so that texture details are better extracted. The fusion result thus keeps a high degree of edge preservation while retaining the best detail information, improving fusion quality.
Optionally, as an embodiment of the present invention, the process of constructing the fitness function of the wolf pack optimization iterative algorithm comprises:
assuming the fused image is I_F, constructing the fitness function

S = E(I_F) × Std(I_F) × Edge(I_F)

where E(I_F) denotes the information entropy of the fused image, Std(I_F) denotes the standard deviation of the fused image, and Edge(I_F) denotes the degree of edge preservation of the fused image.
The process of calculating the entropy comprises:
the formula for calculating the entropy is

$$E(I_F) = -\sum_{i} p_i \log_2 p_i$$

where $p_i$ denotes the probability distribution of the image pixels (the normalized grey-level histogram).
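Read as the grey-level histogram entropy, the formula can be sketched as follows (the 256-bin histogram is an assumption appropriate for 8-bit images):

```python
import numpy as np

def entropy(img):
    """Information entropy E = -sum_i p_i * log2(p_i), where p_i is the
    grey-level probability distribution of the image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                 # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```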
The process of calculating the standard deviation comprises:
the formula for calculating the standard deviation is

$$Std(I_F) = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left(I_F(i, j) - \mu\right)^{2}}$$

where M, N denote the image size and $\mu$ denotes the mean grey value;
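A direct reading of the standard-deviation formula over an M×N image:

```python
import numpy as np

def std_dev(img):
    """Standard deviation of an image: the root mean squared deviation
    from the global mean over all M*N pixels."""
    mu = img.mean()
    M, N = img.shape
    return float(np.sqrt(np.sum((img - mu) ** 2) / (M * N)))
```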
the process of calculating the edge retention includes:
respectively calculating the edge intensity and the direction of the infrared image and the visible light image according to a sobel edge operator, wherein the formula for calculating the edge intensity is as follows:
the formula for calculating the direction is:
wherein i and j represent directions, GiAnd GjRepresenting gradients in the i and j directions, respectively.
Calculating relative edge intensities and relative directions of the hypothetical fused image with respect to the infrared image and the visible light image, the formula for calculating the relative edge intensities being:
the formula for calculating the relative direction is:
calculating the degree of retention of the relative edge strength and the degree of retention of the relative direction, wherein the formula for calculating the degree of retention of the relative edge strength is as follows:
a formula for calculating the degree of retention of the relative direction:
define the total edge retention as:
calculating the total edge information as:
wherein, Гσ、Гθ、Kσ、Kθ、δσAnd deltaθIs constant, Гσ=0.994,Гθ=0.9879,Kσ=-15、Kθ=-22,δσ=0.5、δθ=0.8。
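The per-pixel preservation chain (Sobel strength and direction, relative values, then the two sigmoids with the constants from the text) can be sketched as below; the small convolution helper and the exact handling of zero gradients are illustrative assumptions:

```python
import numpy as np

G_S, K_S, D_S = 0.994, -15.0, 0.5     # Gamma_sigma, K_sigma, delta_sigma
G_T, K_T, D_T = 0.9879, -22.0, 0.8    # Gamma_theta, K_theta, delta_theta

def conv3(img, k):
    """Tiny 'same' 3x3 correlation (enough for the Sobel kernels)."""
    p = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def sobel(img, eps=1e-12):
    """Edge strength G = sqrt(Gi^2 + Gj^2) and direction arctan(Gj/Gi)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gi, gj = conv3(img, kx), conv3(img, kx.T)
    return np.sqrt(gi**2 + gj**2), np.arctan(gj / (gi + eps))

def edge_preservation(src, fused, eps=1e-12):
    """Per-pixel preservation Q = Q_g * Q_alpha of one source image's
    edge strength and direction in the fused image."""
    g_a, a_a = sobel(src)
    g_f, a_f = sobel(fused)
    # relative strength: the smaller edge strength over the larger one
    rel_g = np.where(g_a > g_f, g_f / (g_a + eps), g_a / (g_f + eps))
    rel_a = 1.0 - np.abs(a_a - a_f) / (np.pi / 2)
    q_g = G_S / (1.0 + np.exp(K_S * (rel_g - D_S)))
    q_a = G_T / (1.0 + np.exp(K_T * (rel_a - D_T)))
    return q_g * q_a
```

The total edge information then averages these maps for the infrared and visible sources, weighted by their edge strengths.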
In the above embodiment, the information entropy and standard deviation mainly measure the image information content and contrast, while the edge retention term mainly evaluates the integrity of the edge structure information of the fusion result. Defining the fitness function with these three indexes improves the detail content and edge contour retention of the multi-source fusion result, as well as its clarity and contrast.
Optionally, as an embodiment of the present invention, as shown in fig. 2, the process of determining the weight terms of the image components to be fused comprises:
S1, initializing the wolf pack: the hunting space is set as an N×d Euclidean space, where N is the number of wolves and d is the variable dimension, and the maximum iteration number K_max and the maximum scouting number T_max are defined; the variables are the five weight coefficients w_1, w_2, w_3, w_4 and w_5, i.e. d = 5.
S2, scouting: the fitness value of each wolf is calculated from the fitness function; the wolf with the largest fitness value is taken as the head wolf, and the wolves with the largest fitness values among the remainder are set as scout wolves, which iterate according to the scouting formula

$$x_{id}^{p} = x_{id} + \sin\left(2\pi\cdot\frac{p}{h}\right)\cdot step_{a}^{d}, \quad p = 1, 2, \ldots, h,$$

until a scout wolf's fitness value exceeds that of the head wolf, or the number of scouting attempts reaches T_max; here $x_{id}$ denotes the position of a scout wolf in the d-dimensional space, p is the moving direction of the scout wolf among h candidate directions, and $step_{a}^{d}$ denotes the search step in the d-dimensional space.
S3, prey attack: wolves other than the head wolf are randomly selected as fierce wolves and move according to the prey attack formula

$$x_{id}^{k+1} = x_{id}^{k} + step_{b}^{d}\cdot\frac{g_{d}^{k} - x_{id}^{k}}{\left|g_{d}^{k} - x_{id}^{k}\right|},$$

where $step_{b}^{d}$ is the attack step and $g_{d}^{k}$ denotes the position of the k-th generation head wolf in dimension d.
The head wolf position is X_L with fitness value Y_L. If a fierce wolf's fitness Y_i is greater than the head wolf's Y_L, then Y_L is set to Y_i and the calling behaviour is performed; if Y_i is less than Y_L, the attack continues until the distance d_is ≤ d_near. The fitness value Y_L is then output, and the weight terms and weight coefficients of the image components to be fused are determined from the output fitness values; the weight terms comprise the infrared cartoon component C_inf, the infrared texture component T_inf, the visible cartoon component C_vis, the visible texture component T_vis and the cartoon component C_dif of the infrared-visible difference image.
Before the output, the method further comprises the following update steps.
The head wolf position and the optimal target are updated by the "winner becomes king" rule: under the head wolf's guidance, the fierce wolves run toward the head wolf's position; if the target fitness after running is greater than the current target fitness, it replaces the current value, otherwise it is unchanged. If during the run a fierce wolf's target fitness at some position exceeds the head wolf's value, that fierce wolf becomes the head wolf, and the other wolves are called to approach its position.
The wolf pack is then updated by the "survival of the fittest" rule: after each iteration, the m wolves with the worst objective function values are eliminated, and m new wolves are randomly generated according to the formula for initializing the wolf pack positions.
In this embodiment, the key combination information, i.e. the weight coefficients corresponding to the weight terms, is found by the wolf pack optimization iterative algorithm, which improves the fusion precision and resolves the prior-art tension between keeping a complete edge contour and retaining as much texture detail information as possible when fusing infrared and visible images.
Optionally, as an embodiment of the present invention, as shown in fig. 2, the weighted calculation with the determined weight terms and weight coefficients comprises:
letting the fused image be I_F and calculating according to the following weighted combination formula:

$$I_F = w_1 C_{inf} + w_2 T_{inf} + w_3 C_{vis} + w_4 T_{vis} + w_5 C_{dif}$$

where w_1, w_2, w_3, w_4 and w_5 denote the weight coefficients, each taking a value in the range 0 to 1.
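The weighted combination itself is a single vectorized expression over the five component arrays produced by the decomposition step, for example:

```python
import numpy as np

def weighted_fusion(c_inf, t_inf, c_vis, t_vis, c_dif, w):
    """I_F = w1*C_inf + w2*T_inf + w3*C_vis + w4*T_vis + w5*C_dif,
    with each coefficient in [0, 1]."""
    assert len(w) == 5 and all(0.0 <= wi <= 1.0 for wi in w)
    return (w[0] * c_inf + w[1] * t_inf + w[2] * c_vis
            + w[3] * t_vis + w[4] * c_dif)
```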
In the above embodiment, the final fusion result is obtained by direct weighted combination, unlike the current mainstream multi-scale fusion methods, which require a reconstruction step; that step easily produces artifacts and is unfavourable to later recognition.
Optionally, as an embodiment of the present invention, the difference calculation for the infrared image and the visible light image comprises:
calculating the difference between the infrared image and the visible light image according to the difference formula

I_dif = I_inf - I_vis

where I_dif denotes the infrared-visible difference image, I_inf the infrared image and I_vis the visible light image.
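One practical note when implementing the difference formula: 8-bit images should be cast to a signed or floating type first, otherwise negative differences wrap around under unsigned integer arithmetic. A minimal sketch:

```python
import numpy as np

def difference_image(ir, vis):
    """I_dif = I_inf - I_vis, computed in float so that negative
    differences are preserved rather than wrapped by uint8 arithmetic."""
    return ir.astype(float) - vis.astype(float)
```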
In the above embodiment, because the infrared image, owing to the characteristics of the infrared sensor, contains extra edge contour information relative to the visible light image, subtracting the visible light image from the infrared image yields the extra features or regions that are absent from the source visible light image, which aids the overall fusion quality.
Optionally, as an embodiment of the present invention, as shown in fig. 3, an infrared image and visible light image fusion apparatus includes:
and the difference processing module is used for carrying out difference calculation on the infrared image and the visible light image to obtain an infrared and visible light difference image.
And the decomposition module is used for respectively carrying out decomposition calculation on the infrared image, the visible light image and the infrared and visible light differential image according to a total variation model to respectively obtain an infrared image cartoon texture component, a visible light image cartoon texture component and a differential image cartoon texture component.
The function construction module is used for constructing a fitness function of the wolf pack optimization iterative algorithm;
and the weight determining module is used for determining a weight item and a weight coefficient corresponding to the weight item in the infrared image cartoon texture component, the visible light image cartoon texture component and the difference image cartoon texture component according to the wolf colony optimization iterative algorithm and the constructed fitness function, and taking the determined weight item and the weight coefficient as the weight item and the weight coefficient of a fusion image component, wherein the fusion image is an image obtained by combining the infrared image and the visible light image.
And the fusion module is used for carrying out weighting calculation on the determined weight item and the weight coefficient and obtaining the fusion image according to the calculation result.
Optionally, as another embodiment of the present invention, an infrared image and visible light image fusion apparatus includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the infrared image and visible light image fusion method as described above is implemented.
Alternatively, as another embodiment of the present invention, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the infrared image and visible light image fusion method as described above.
As shown in fig. 4, the reference numerals in fig. 4 denote (A) the source infrared cartoon component; (B) the source infrared texture component; (C) the source visible cartoon component; (D) the source visible texture component; (E) the difference-image cartoon component; (F) the difference-image texture component.
obviously, the cartoon components after the total variation decomposition have rough outline information, and the detail information of the texture components is very clear. Based on this, cartoon and texture components after infrared and visible image decomposition are selected in the final weighted combination. In order to extract the difference characteristic information between the infrared image and the visible light image, the difference image is obtained, and as can be seen from the graph (E), the cartoon component of the difference image mainly reflects the difference contour edge information between the two source images, and the contour edges are very important information in the image fusion process. Therefore, the final weighting components mainly include: the infrared light image cartoon component, the infrared light image texture component, the visible light image cartoon component, the visible light image texture component and the difference image cartoon component respectively correspond to the w1、w2、w3、w4And w55 weight coefficients.
As shown in fig. 5, the reference numerals in fig. 5 denote (a) the visible light image; (b) the infrared image; (c) the NSCT method; (d) the Shearlet method; (e) the SR method; (f) the TV variational multi-scale analysis method; (g) the fusion method provided by the invention.
From a subjective visual point of view, the NSCT method preserves edges slightly better than the Shearlet method thanks to its translation invariance. Although the SR method can extract the spatial detail information in the source images, the edges of the character regions in the scene remain blurred and the contrast is low. The TV variational multi-scale analysis method uses variational multi-scale decomposition with guided filtering to select texture information; its fusion result has relatively high edge retention and contrast, and its texture information is clearer than that of the preceding methods. The present method uses the wolf pack algorithm to optimize the combination weights of the texture and cartoon components, obtaining contrast and edge detail slightly above those of the variational multi-scale analysis method, and the best subjective visual effect.
The following table is an evaluation index data table:
Objective indexes are introduced to evaluate the results of the various fusion algorithms. Four common image fusion performance indexes are selected to evaluate the objective quality of the results of the fusion methods: the information-theory evaluation index Q_MI, the human visual sensitivity evaluation index Q_CB, the image structural similarity evaluation index Q_Y and the gradient feature evaluation index Q_G. The mutual information index Q_MI measures the degree of correlation between two images and is used to measure how much source-image information is contained in the final fusion result; the gradient index Q_G measures the gradient information transferred from the source infrared and visible light images to the final fusion result; the structural similarity index Q_Y measures the degree to which structural information is retained in the fusion result; and the visual sensitivity index Q_CB takes the mean of the global quality map.
According to the method, the infrared source image, the visible light source image and the differential image are decomposed into cartoon components and texture components by the total variation model; the weight terms and weight coefficients are determined from the source images and their components by the wolf pack optimization iterative algorithm; and the determined weight terms and weight coefficients are weighted and combined to obtain the final fused image. The fusion result is robust to noise while retaining complete contour information and detail information, with high definition and contrast.
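The overall flow described above can be sketched end to end as follows. This is a minimal illustrative NumPy sketch, not the patented implementation: the TV-l1 decomposition is stood in for by a simple iterative box blur, the weights are fixed rather than optimized by the wolf pack algorithm, and the function names (`tv_decompose`, `fuse`) are our own.

```python
import numpy as np

def tv_decompose(img, iters=50):
    # Placeholder for the TV-l1 decomposition: an iterative box blur stands in
    # for the cartoon (structure) component; texture is the residual.
    cartoon = img.copy()
    for _ in range(iters):
        cartoon = (np.roll(cartoon, 1, 0) + np.roll(cartoon, -1, 0) +
                   np.roll(cartoon, 1, 1) + np.roll(cartoon, -1, 1) + cartoon) / 5.0
    return cartoon, img - cartoon

def fuse(ir, vis, w):
    # w = (w1..w5): weights for IR cartoon/texture, VIS cartoon/texture,
    # and the cartoon component of the difference image.
    dif = ir - vis
    c_inf, t_inf = tv_decompose(ir)
    c_vis, t_vis = tv_decompose(vis)
    c_dif, _ = tv_decompose(dif)
    return w[0]*c_inf + w[1]*t_inf + w[2]*c_vis + w[3]*t_vis + w[4]*c_dif

ir = np.random.rand(32, 32)
vis = np.random.rand(32, 32)
fused = fuse(ir, vis, (0.5, 0.8, 0.5, 0.8, 0.3))
```

In the actual method, `tv_decompose` would be replaced by the TV-l1 minimization of claim 2 and the fixed weight tuple by the output of the wolf pack optimizer of claims 3 and 4.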
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A method for fusing an infrared image and a visible light image is characterized by comprising the following steps:
carrying out differential calculation on the infrared image and the visible light image to obtain an infrared and visible light differential image;
respectively carrying out decomposition calculation on the infrared image, the visible light image and the infrared and visible light differential image according to a total variation model to respectively obtain an infrared image cartoon texture component, a visible light image cartoon texture component and a differential image cartoon texture component;
constructing a fitness function of a wolf pack optimization iterative algorithm;
determining, according to the wolf pack optimization iterative algorithm and the constructed fitness function, a weight term and a corresponding weight coefficient in the infrared image cartoon texture component, the visible light image cartoon texture component and the difference image cartoon texture component, the weight term and the corresponding weight coefficient being used as the weight term and weight coefficient of a fused image component, wherein the fused image is an image obtained by combining the infrared image and the visible light image;
and performing weighting calculation according to the determined weight item and the weight coefficient to obtain the fused image.
2. The method according to claim 1, wherein the process of performing decomposition calculation on the infrared image, the visible light image and the infrared and visible light differential image according to a total variation model comprises:
when the infrared image is subjected to decomposition calculation, defining the infrared image as follows according to a total variation problem in a total variation model:
I_inf = T_inf + C_inf,
wherein I_inf represents the infrared image, T_inf represents the texture component of the infrared image, and C_inf represents the cartoon component of the infrared image;
when the visible light image is subjected to decomposition calculation, the visible light image is defined as follows according to a total variation problem in a total variation model:
I_vis = T_vis + C_vis,
wherein I_vis represents the visible light image, T_vis represents the texture component of the visible light image, and C_vis represents the cartoon component of the visible light image;
when the infrared and visible light differential image is subjected to decomposition calculation, defining the infrared and visible light differential image as follows according to a total variation problem in a total variation model:
I_dif = T_dif + C_dif,
wherein I_dif represents the infrared and visible light differential image, T_dif represents the texture component of the infrared and visible light differential image, and C_dif represents the cartoon component of the infrared and visible light differential image;
the total variation model is the TV-l1 model; when the infrared image is decomposed, the minimization function corresponding to the infrared image is calculated according to the TV-l1 model, and the minimization function corresponding to the infrared image is expressed as the first formula:

min_{C_inf} ∫_Ω |∇C_inf| dΩ + λ ∫_Ω |I_inf − C_inf| dΩ,

wherein the solution of the first formula is the cartoon component C_inf of the infrared image, ∫_Ω |∇C_inf| dΩ is expressed as the total variation regularization term of the infrared image cartoon component, λ ∫_Ω |I_inf − C_inf| dΩ is expressed as the fidelity term, and λ is the regularization parameter;
when the visible light image is decomposed, the minimization function corresponding to the visible light image is calculated according to the TV-l1 model, and the minimization function corresponding to the visible light image is expressed as the second formula:

min_{C_vis} ∫_Ω |∇C_vis| dΩ + λ ∫_Ω |I_vis − C_vis| dΩ,

wherein the solution of the second formula is the cartoon component C_vis of the visible light image, ∫_Ω |∇C_vis| dΩ is expressed as the total variation regularization term of the visible light image cartoon component, λ ∫_Ω |I_vis − C_vis| dΩ is expressed as the fidelity term, and λ is the regularization parameter;
when the infrared and visible light differential image is decomposed, the minimization function corresponding to the differential image is calculated according to the TV-l1 model, and the minimization function of the infrared and visible light differential image is expressed as the third formula:

min_{C_dif} ∫_Ω |∇C_dif| dΩ + λ ∫_Ω |I_dif − C_dif| dΩ,

wherein the solution of the third formula is the cartoon component C_dif of the infrared and visible light differential image, ∫_Ω |∇C_dif| dΩ is expressed as the total variation regularization term of the differential image cartoon component, λ ∫_Ω |I_dif − C_dif| dΩ is expressed as the fidelity term, and λ is the regularization parameter;
when the infrared image is decomposed and calculated, calculating the texture component of the infrared image according to a fourth formula, wherein the fourth formula is as follows:
T_inf = I_inf − C_inf;
when the decomposition calculation is carried out on the visible light image, calculating texture component components of the visible light image according to a fifth formula, wherein the fifth formula is as follows:
T_vis = I_vis − C_vis;
when the infrared and visible light differential image is decomposed and calculated, calculating texture component components of the infrared and visible light differential image according to a sixth formula, wherein the sixth formula is as follows:
T_dif = I_dif − C_dif;
respectively solving the optimization problems of the minimization function formula of the infrared image, the minimization function formula of the visible light image and the minimization function formula of the infrared and visible light differential image according to the gradient descent method:

C_ij^(n+1) = C_ij^n + Δt · [ ∇⁻ · ( ∇⁺C_ij^n / (|∇C_ij^n| + ε) ) + λ · sign(I_ij − C_ij^n) ],

wherein (i, j) represents the position of a pixel in the infrared image, the visible light image or the infrared and visible light differential image, the operators ∇⁺ and ∇⁻ represent the forward and backward differences respectively, |∇C_ij| represents the gradient magnitude, n represents the iteration number, Δm and Δn are the spacings on the image grid, Δt represents the time step, and ε is set to a very small value.
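A direct gradient-descent solver for this TV-l1 minimization could be sketched as follows. This is our own minimal implementation under periodic boundary conditions; the step size, iteration count and ε are illustrative choices, not values from the patent.

```python
import numpy as np

def tv_l1_cartoon(I, lam=1.0, dt=0.1, eps=1e-8, iters=200):
    """Gradient descent on  min_C ∫|∇C| dΩ + λ∫|I − C| dΩ  (periodic boundaries)."""
    C = I.copy()
    for _ in range(iters):
        # Forward differences of the current cartoon estimate.
        cx = np.roll(C, -1, 1) - C
        cy = np.roll(C, -1, 0) - C
        mag = np.sqrt(cx**2 + cy**2) + eps
        px, py = cx / mag, cy / mag
        # Backward differences give the divergence of the unit gradient field.
        div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
        # Descent step: curvature term plus the l1 fidelity subgradient.
        C = C + dt * (div + lam * np.sign(I - C))
    return C

I = np.random.rand(16, 16)
C = tv_l1_cartoon(I)      # cartoon component
T = I - C                 # texture component, as in the fourth formula
```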
3. The method of claim 1, wherein the process of constructing the fitness function of the wolf pack optimization iterative algorithm comprises:
assuming that the fused image is I_F, constructing a fitness function; the fitness function is:

S = E(I_F) × Std(I_F) × Edge(I_F),

wherein E(I_F) represents the entropy of the fused image, Std(I_F) represents the standard deviation of the fused image, and Edge(I_F) represents the edge preservation of the fused image;
the process of calculating the entropy comprises:
the formula for calculating the entropy is:

E(I_F) = − Σ_i p_i · log2(p_i),

wherein p_i represents the probability distribution of the image pixels (the normalized grey-level histogram);
the process of calculating the standard deviation comprises:
the formula for calculating the standard deviation is:

Std(I_F) = sqrt( (1/(M·N)) Σ_{i=1..M} Σ_{j=1..N} (I_F(i, j) − μ)² ),

wherein M and N denote the image size and μ denotes the mean grey value of I_F;
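The entropy and standard-deviation factors of the fitness function can be computed as in the following sketch (8-bit grey levels assumed; the function names are our own):

```python
import numpy as np

def entropy(img):
    # Shannon entropy of the 256-bin grey-level histogram: E = -sum p_i log2 p_i
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0·log(0) terms contribute nothing
    return float(-np.sum(p * np.log2(p)))

def std_dev(img):
    # Standard deviation over the whole M x N image.
    return float(np.std(img))

img = np.random.randint(0, 256, (64, 64))
e, s = entropy(img), std_dev(img)
```

A constant image has entropy 0 and standard deviation 0; an 8-bit image's entropy is bounded above by 8 bits.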
the process of calculating the edge retention includes:
respectively calculating the edge intensity and direction of the infrared image and the visible light image according to the Sobel edge operator, wherein the formula for calculating the edge intensity is:

G(i, j) = sqrt( G_i(i, j)² + G_j(i, j)² ),

and the formula for calculating the direction is:

α(i, j) = arctan( G_j(i, j) / G_i(i, j) ),

wherein i and j represent the two image directions, and G_i and G_j represent the Sobel gradients in the i and j directions respectively;
calculating the relative edge intensity and relative direction of the hypothetical fused image with respect to the infrared image and the visible light image, wherein, for a source image A and the fused image F, the formula for calculating the relative edge intensity is:

G^{AF}(i, j) = G_F(i, j) / G_A(i, j) if G_A(i, j) > G_F(i, j), and G_A(i, j) / G_F(i, j) otherwise,

and the formula for calculating the relative direction is:

A^{AF}(i, j) = 1 − |α_A(i, j) − α_F(i, j)| / (π/2);
calculating the degree of retention of the relative edge intensity and the degree of retention of the relative direction, wherein the formula for calculating the degree of retention of the relative edge intensity is:

Q_g^{AF}(i, j) = Γ_σ / (1 + e^{K_σ (G^{AF}(i, j) − δ_σ)}),

and the formula for calculating the degree of retention of the relative direction is:

Q_α^{AF}(i, j) = Γ_θ / (1 + e^{K_θ (A^{AF}(i, j) − δ_θ)});
defining the total edge retention as:

Q^{AF}(i, j) = Q_g^{AF}(i, j) · Q_α^{AF}(i, j),

and calculating the total edge information as:

Edge(I_F) = Σ_{i,j} [ Q^{AF}(i, j) w^A(i, j) + Q^{BF}(i, j) w^B(i, j) ] / Σ_{i,j} [ w^A(i, j) + w^B(i, j) ],

wherein A and B denote the infrared and visible light source images, the weights w^A and w^B are taken as the corresponding edge intensities, and Γ_σ, Γ_θ, K_σ, K_θ, δ_σ and δ_θ are constants: Γ_σ = 0.994, Γ_θ = 0.9879, K_σ = −15, K_θ = −22, δ_σ = 0.5, δ_θ = 0.8.
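With the constants given above, the per-pixel edge-preservation terms could be sketched as follows. This is our own implementation in the style of the Xydeas–Petrović metric; Sobel gradients are computed by a small hand-rolled periodic convolution, and only the per-pixel map Q^AF is shown, not the final weighted sum.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel, i direction
KY = KX.T                                                   # Sobel, j direction

def conv2(img, k):
    # 3x3 convolution with periodic boundaries (enough for a sketch).
    out = np.zeros_like(img, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += k[di + 1, dj + 1] * np.roll(np.roll(img, -di, 0), -dj, 1)
    return out

def strength_orientation(img):
    gi, gj = conv2(img, KX), conv2(img, KY)
    return np.hypot(gi, gj), np.arctan2(gj, gi + 1e-12)

def edge_preservation(A, F, Gs=0.994, Ks=-15.0, ds=0.5, Gt=0.9879, Kt=-22.0, dt=0.8):
    gA, aA = strength_orientation(A)
    gF, aF = strength_orientation(F)
    # Relative edge strength and relative orientation of F with respect to A.
    GAF = np.where(gA > gF, gF / (gA + 1e-12), gA / (gF + 1e-12))
    AAF = 1.0 - np.abs(aA - aF) / (np.pi / 2)
    # Sigmoid preservation terms with the constants quoted in the claim.
    Qg = Gs / (1.0 + np.exp(Ks * (GAF - ds)))
    Qa = Gt / (1.0 + np.exp(Kt * (AAF - dt)))
    return Qg * Qa  # per-pixel edge-preservation map Q^AF

A = np.random.rand(16, 16)
Q = edge_preservation(A, A.copy())  # a perfect copy preserves edges well
```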
4. The method for fusing the infrared image and the visible light image as claimed in claim 3, wherein the process of using the determined weight term as the weight term of the image component to be fused comprises:
setting the hunting area as an N×d Euclidean space, where N is the number of wolves in the pack and d is the dimension of the variables; defining the maximum iteration number K_max and the maximum number of scouting attempts T_max, wherein the variable dimensions comprise the five weight coefficients w1, w2, w3, w4 and w5;
calculating the fitness function value of each wolf according to the fitness function, taking the wolf with the maximum fitness value as the head wolf, setting the wolves with the largest fitness values among the remaining wolves as scout wolves, and iterating through the scouting formula:

x_{id}^p = x_{id} + sin(2π · p / h) · step_a^d,  p = 1, 2, …, h,

stopping the iteration when the fitness value of a scout wolf exceeds that of the head wolf, or when the number of scouting attempts reaches T_max, wherein x_{id} represents the position of the scout wolf in the d-dimensional space, p is the moving direction of the scout wolf, h is the number of scouting directions, and step_a^d represents the scouting step length in the d-dimensional space;
randomly selecting wolves other than the head wolf as fierce wolves, which run toward the prey according to the attack formula:

x_{id}^{k+1} = x_{id}^k + step_b^d · (g_d^k − x_{id}^k) / |g_d^k − x_{id}^k|,

wherein step_b^d is the attack step length and g_d^k represents the position of the head wolf of generation k in the d-th dimension;
the position of the head wolf is X_L with fitness value Y_L; if the fitness Y_i of a fierce wolf is greater than Y_L, then Y_L = Y_i, the fierce wolf replaces the head wolf and initiates the calling behavior; if Y_i is less than Y_L, the fierce wolf continues the attack until its distance d_is to the head wolf satisfies d_is ≤ d_near; outputting the fitness function value Y_L of each wolf, and determining, from the output fitness values, the weight terms and weight coefficients to be used for the image components to be fused, wherein the weight terms comprise the infrared image cartoon component C_inf, the infrared image texture component T_inf, the visible light image cartoon component C_vis, the visible light image texture component T_vis and the cartoon component C_dif of the infrared and visible light differential image.
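The wolf pack search over the five weight coefficients might be sketched as follows. This is a heavily simplified toy: the scouting/calling/attack schedule is collapsed into one loop, a synthetic fitness stands in for S, and the function names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def wolf_pack_optimize(fitness, dim=5, n_wolves=20, k_max=50, step=0.1):
    # Initialise the pack with random weight vectors in [0, 1]^dim.
    wolves = rng.random((n_wolves, dim))
    for _ in range(k_max):
        scores = np.array([fitness(w) for w in wolves])
        lead = wolves[np.argmax(scores)].copy()      # head wolf
        # Scouting: each wolf probes a random direction and keeps improvements.
        for i in range(n_wolves):
            probe = np.clip(wolves[i] + step * rng.standard_normal(dim), 0.0, 1.0)
            if fitness(probe) > scores[i]:
                wolves[i] = probe
        # Calling/attack: the pack closes in on the head wolf.
        wolves = np.clip(wolves + 0.5 * step * (lead - wolves), 0.0, 1.0)
    scores = np.array([fitness(w) for w in wolves])
    return wolves[np.argmax(scores)]

# Toy fitness standing in for S: peak at a known weight vector.
target = np.array([0.5, 0.8, 0.5, 0.8, 0.3])
best = wolf_pack_optimize(lambda w: -np.sum((w - target) ** 2))
```

In the patented method, the fitness would be S = E(I_F) × Std(I_F) × Edge(I_F) evaluated on the image fused with the candidate weights.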
5. The method for fusing the infrared image and the visible light image as claimed in claim 3, wherein the process of performing the weighted calculation on the determined weight terms and weight coefficients comprises:
let the fused image be IFThe calculation is performed according to the following weighted combination formula:
I_F = w1·C_inf + w2·T_inf + w3·C_vis + w4·T_vis + w5·C_dif,
wherein w1, w2, w3, w4 and w5 are all weight coefficients, and the value range of each weight coefficient is 0 to 1.
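As a concrete numeric instance of the weighted combination formula (toy 2×2 constant components and illustrative weights):

```python
import numpy as np

C_inf, T_inf = np.full((2, 2), 4.0), np.full((2, 2), 1.0)
C_vis, T_vis = np.full((2, 2), 2.0), np.full((2, 2), 0.5)
C_dif = np.full((2, 2), 1.0)

w1, w2, w3, w4, w5 = 0.5, 0.8, 0.5, 0.8, 0.3   # illustrative weights in [0, 1]
I_F = w1 * C_inf + w2 * T_inf + w3 * C_vis + w4 * T_vis + w5 * C_dif
# each pixel: 0.5*4 + 0.8*1 + 0.5*2 + 0.8*0.5 + 0.3*1 ≈ 4.5
```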
6. The method for fusing the infrared image and the visible light image according to claim 1, wherein the process of performing the differential calculation on the infrared image and the visible light image comprises:
carrying out difference calculation on the infrared image and the visible light image according to a difference formula, wherein the difference formula is as follows:
I_dif = I_inf − I_vis,
wherein I_dif represents the infrared and visible light differential image, I_inf represents the infrared image, and I_vis represents the visible light image.
7. An infrared image and visible light image fusion device is characterized by comprising:
the difference processing module is used for carrying out difference calculation on the infrared image and the visible light image to obtain an infrared and visible light difference image;
the decomposition module is used for respectively carrying out decomposition calculation on the infrared image, the visible light image and the infrared and visible light differential image according to a total variation model to respectively obtain an infrared image cartoon texture component, a visible light image cartoon texture component and a differential image cartoon texture component;
the function construction module is used for constructing a fitness function of the wolf pack optimization iterative algorithm;
a weight determining module, configured to determine, according to the wolf pack optimization iterative algorithm and the constructed fitness function, a weight item and a weight coefficient corresponding to the weight item in the infrared image cartoon texture component, the visible light image cartoon texture component, and the difference image cartoon texture component, where the weight item and the weight coefficient are used as a weight item and a weight coefficient of a fused image component, and the fused image is an image obtained by combining the infrared image and the visible light image;
and the fusion module is used for carrying out weighting calculation on the determined weight item and the weight coefficient and obtaining the fusion image according to the calculation result.
8. An infrared image and visible light image fusion device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that when the computer program is executed by the processor, the infrared image and visible light image fusion method according to any one of claims 1 to 6 is implemented.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method for fusing an infrared image and a visible light image according to any one of claims 1 to 6.
CN201910579632.0A 2019-06-28 2019-06-28 Infrared image and visible light image fusion method and device and storage medium Active CN110349117B (en)
