CN110349117A - Infrared and visible light image fusion method, device and storage medium - Google Patents

Infrared and visible light image fusion method, device and storage medium

Info

Publication number
CN110349117A
Authority
CN
China
Prior art keywords
image
infrared
formula
visible light
wolf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910579632.0A
Other languages
Chinese (zh)
Other versions
CN110349117B (en)
Inventor
冯鑫
胡开群
袁毅
陈希瑞
张建华
翟治芬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Technology and Business University
Original Assignee
Chongqing Technology and Business University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Technology and Business University filed Critical Chongqing Technology and Business University
Priority to CN201910579632.0A priority Critical patent/CN110349117B/en
Publication of CN110349117A publication Critical patent/CN110349117A/en
Application granted granted Critical
Publication of CN110349117B publication Critical patent/CN110349117B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The present invention provides an infrared and visible light image fusion method, device, and storage medium. The method includes: performing a difference calculation on the infrared and visible light images to obtain a difference image; decomposing the infrared image, the visible light image, and the difference image according to a total variation model to obtain the cartoon and texture components of each image; constructing the fitness function of an iterative wolf pack optimization algorithm; and determining weight terms and weight coefficients among the decomposed components, weighting them, and obtaining the fused image from the result. The source images and the difference image are decomposed into cartoon and texture components, the weight terms and weight coefficients are determined from the source images and components by the iterative wolf pack optimization algorithm, and the determined weight terms and weight coefficients are combined by weighting to obtain the final fused image. The fusion result is robust to noise while preserving complete contour and detail information, and its clarity and contrast are also relatively high.

Description

Infrared and visible light image fusion method, device and storage medium
Technical field
The invention relates to the technical field of image processing, and in particular to an infrared and visible light image fusion method, device, and storage medium.
Background technique
Infrared sensors have certain defects when reflecting a real scene, such as low image resolution and low signal-to-noise ratio; visible light sensors can clearly capture the detail information of a scene under suitable conditions, but their imaging is easily affected by natural conditions such as illumination and weather. Because of this complementarity, image fusion methods can exploit the characteristic information of each source image, highlight salient thermal target information, and improve a vision system's understanding of scene information, achieving purposes such as camouflage identification and night vision. Research on infrared and visible light image fusion helps advance and perfect image fusion theory; the results not only serve as a reference for image fusion in other fields, but are also of great significance to national defense and national construction. In civilian applications, such fusion has been successfully applied in systems such as tracking and positioning, fire alarms, parcel security inspection, and night-time vehicle driving. In the military field, fusing infrared and visible light images yields more accurate and reliable target information together with comprehensive scene information, and targets can still be captured successfully under severe weather conditions; for example, a dual-band infrared/visible sniper scope can assist precision strikes on various targets under adverse conditions and improve an army's all-weather combat capability.
At present, infrared and visible light image fusion methods fall into two main categories: multi-scale analysis methods and sparse representation methods. Both tend to lose detail information, and multi-scale methods, because they require a reconstruction step, easily produce artifacts in the result that affect later recognition.
Summary of the invention
The technical problem to be solved by the present invention is to provide an infrared and visible light image fusion method, device and storage medium in view of the deficiencies of the prior art.
The technical solution of the present invention to the above technical problem is as follows: an infrared and visible light image fusion method includes the following steps:
performing a difference calculation on an infrared image and a visible light image to obtain an infrared-visible difference image;
decomposing the infrared image, the visible light image, and the infrared-visible difference image according to a total variation model to obtain infrared image cartoon and texture components, visible light image cartoon and texture components, and difference image cartoon and texture components;
constructing the fitness function of an iterative wolf pack optimization algorithm; specifically, the fitness function is constructed from the fusion indices information entropy, standard deviation, and edge preservation degree;
determining, according to the wolf pack optimization algorithm and the constructed fitness function, weight terms and corresponding weight coefficients among the infrared image cartoon and texture components, the visible light image cartoon and texture components, and the difference image cartoon and texture components, as the weight terms and weight coefficients of the fused image components, the fused image being the image obtained by combining the infrared image and the visible light image;
directly weighting according to the determined weight terms and weight coefficients to obtain the fused image.
Another technical solution of the present invention to the above technical problem is as follows: an infrared and visible light image fusion device, comprising:
a difference processing module, configured to perform a difference calculation on an infrared image and a visible light image to obtain an infrared-visible difference image;
a decomposition module, configured to decompose the infrared image, the visible light image, and the infrared-visible difference image according to a total variation model to obtain infrared image cartoon and texture components, visible light image cartoon and texture components, and difference image cartoon and texture components;
a function construction module, configured to construct the fitness function of an iterative wolf pack optimization algorithm;
a weight determination module, configured to determine, according to the wolf pack optimization algorithm and the constructed fitness function, weight terms and corresponding weight coefficients among the infrared image cartoon and texture components, the visible light image cartoon and texture components, and the difference image cartoon and texture components, as the weight terms and weight coefficients of the fused image components, the fused image being the image obtained by combining the infrared image and the visible light image;
a fusion module, configured to perform weighting according to the determined weight terms and weight coefficients to obtain the fused image.
Another technical solution of the present invention to the above technical problem is as follows: an infrared and visible light image fusion device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the infrared and visible light image fusion method described above is implemented.
Another technical solution of the present invention to the above technical problem is as follows: a computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the infrared and visible light image fusion method described above is implemented.
The beneficial effects of the present invention are: the infrared source image, the visible light source image, and the difference image are decomposed into cartoon and texture components by the total variation model; weight terms and weight coefficients are determined from the source images and components by the iterative wolf pack optimization algorithm; and the determined weight terms and weight coefficients are combined by weighting to obtain the final fused image. The fusion result is robust to noise while preserving complete contour and detail information, and its clarity and contrast are also relatively high.
Detailed description of the invention
Fig. 1 is a flow diagram of the fusion method provided by an embodiment of the present invention;
Fig. 2 is a flow diagram of the image decomposition provided by an embodiment of the present invention;
Fig. 3 is a module block diagram of the fusion device provided by an embodiment of the present invention;
Fig. 4 shows the effect of each image component provided by an embodiment of the present invention;
Fig. 5 is an experimental comparison figure provided by an embodiment of the present invention.
Specific embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings; the given examples serve only to explain the present invention and are not intended to limit its scope.
As shown in Fig. 1, an infrared and visible light image fusion method includes the following steps:
A difference calculation is performed on an infrared image and a visible light image to obtain an infrared-visible difference image.
The infrared image, the visible light image, and the infrared-visible difference image are decomposed according to the total variation model to obtain infrared image cartoon and texture components, visible light image cartoon and texture components, and difference image cartoon and texture components.
The fitness function of the iterative wolf pack optimization algorithm is constructed; specifically, it is constructed from the fusion indices information entropy, standard deviation, and edge preservation degree.
According to the wolf pack optimization algorithm and the constructed fitness function, weight terms and corresponding weight coefficients are determined among the infrared image cartoon and texture components, the visible light image cartoon and texture components, and the difference image cartoon and texture components, as the weight terms and weight coefficients of the fused image components; the fused image is the image obtained by combining the infrared image and the visible light image.
Weighting is performed according to the determined weight terms and weight coefficients to obtain the fused image.
In the above embodiment, the infrared source image, the visible light source image, and the difference image are decomposed into cartoon and texture components by the total variation model; weight terms and weight coefficients are determined from the source images and components by the iterative wolf pack optimization algorithm; and the determined weight terms and weight coefficients are combined by weighting to obtain the final fused image, which has strong noise robustness.
Optionally, in an embodiment of the present invention, as shown in Fig. 2, the process of decomposing the infrared image, the visible light image, and the infrared-visible difference image according to the total variation model includes:
Assume the infrared image, the visible light image, and the infrared-visible difference image are noise-free source images.
When decomposing the infrared image, the infrared image is defined according to the total variation problem of the total variation model as:
Iinf = Tinf + Cinf,
where Iinf denotes the infrared image, Tinf denotes the infrared image texture component, and Cinf denotes the infrared image cartoon component.
When decomposing the visible light image, the visible light image is defined according to the total variation problem of the total variation model as:
Ivis = Tvis + Cvis,
where Ivis denotes the visible light image, Tvis denotes the visible light image texture component, and Cvis denotes the visible light image cartoon component.
When decomposing the infrared-visible difference image, the infrared-visible difference image is defined according to the total variation problem of the total variation model as:
Idif = Tdif + Cdif,
where Idif denotes the infrared-visible difference image, Tdif denotes the infrared-visible difference image texture component, and Cdif denotes the infrared-visible difference image cartoon component.
The total variation model is the TV-l1 model. When decomposing the infrared image, the minimization functional of the infrared image is computed according to the TV-l1 model and expressed as the first formula:
Cinf = argmin { ∫Ω |∇Cinf| dΩ + λ||Iinf - Cinf||1 },
where the solution of the first formula is the infrared image cartoon component, ∫Ω |∇Cinf| dΩ is the total variation regularization term of the infrared image cartoon component, λ||Iinf - Cinf||1 is the fidelity term, and λ is the regularization parameter.
When decomposing the visible light image, the minimization functional of the visible light image is computed according to the TV-l1 model and expressed as the second formula:
Cvis = argmin { ∫Ω |∇Cvis| dΩ + λ||Ivis - Cvis||1 },
where the solution of the second formula is the visible light image cartoon component, ∫Ω |∇Cvis| dΩ is the total variation regularization term of the visible light image cartoon component, λ||Ivis - Cvis||1 is the fidelity term, and λ is the regularization parameter.
When decomposing the infrared-visible difference image, the minimization functional of the infrared-visible difference image is computed according to the TV-l1 model and expressed as the third formula:
Cdif = argmin { ∫Ω |∇Cdif| dΩ + λ||Idif - Cdif||1 },
where the solution of the third formula is the infrared-visible difference image cartoon component, ∫Ω |∇Cdif| dΩ is the total variation regularization term of the infrared-visible difference image cartoon component, λ||Idif - Cdif||1 is the fidelity term, and λ is the regularization parameter.
When decomposing the infrared image, the infrared image texture component is calculated according to the fourth formula:
Tinf = Iinf - Cinf.
When decomposing the visible light image, the visible light image texture component is calculated according to the fifth formula:
Tvis = Ivis - Cvis.
When decomposing the infrared-visible difference image, the infrared-visible difference image texture component is calculated according to the sixth formula:
Tdif = Idif - Cdif.
The minimization functionals of the infrared image, the visible light image, and the infrared-visible difference image are each solved as optimization problems by the gradient descent method, where (i, j) denotes the position of a pixel in the infrared image, the visible light image, or the infrared-visible difference image, the parameters ∇+ and ∇- denote the forward and backward differences respectively, ∇Cij denotes the gradient magnitude, n is the iteration number, Δm and Δn are the image grid spacings, Δt denotes the time step, and ε is set to a very small value.
In the above embodiment, the decomposition uses the total variation model, which is inherently robust to noise. The fidelity term forces the cartoon component to stay close to the original image, and the regularization parameter balances the total variation regularization term against the fidelity term, so texture details can be extracted well; the fusion result therefore retains the best possible detail information while maintaining a high edge preservation degree, improving fusion quality.
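For illustration, the following Python sketch shows one way to perform a TV-l1 cartoon/texture split by explicit gradient descent on a smoothed version of the energy above. It is a minimal illustration of the decomposition step, not the exact discrete scheme of this disclosure; the parameter values (lam, dt, n_iter, eps) and the smoothing of the absolute values are assumptions made for the sketch.

```python
import numpy as np

def tv_l1_decompose(image, lam=0.8, dt=0.1, n_iter=200, eps=1e-3):
    """Split an image into cartoon + texture with a smoothed TV-l1 energy.

    Approximately minimizes  integral(|grad C| + lam * |I - C|)  by explicit
    gradient descent, using |x| ~ sqrt(x^2 + eps^2) so the energy is smooth.
    Returns (cartoon, texture) with texture = image - cartoon.
    """
    I = image.astype(np.float64)
    C = I.copy()
    for _ in range(n_iter):
        # forward differences of the cartoon estimate (last row/column replicated)
        Cx = np.diff(C, axis=1, append=C[:, -1:])
        Cy = np.diff(C, axis=0, append=C[-1:, :])
        mag = np.sqrt(Cx**2 + Cy**2 + eps**2)
        px, py = Cx / mag, Cy / mag
        # backward differences give the divergence of the normalized gradient
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # descent step: curvature term plus smoothed L1 fidelity term
        fidelity = (I - C) / np.sqrt((I - C)**2 + eps**2)
        C = C + dt * (div + lam * fidelity)
    return C, I - C
```

In practice the three source images (infrared, visible, and difference) would each be passed through the same routine to obtain Cinf/Tinf, Cvis/Tvis and Cdif/Tdif.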
Optionally, in an embodiment of the present invention, the step of constructing the fitness function of the wolf pack optimization algorithm includes:
Assume the fused image is IF and construct the fitness function as
S = E(IF) * Std(IF) * Edge(IF),
where E(IF) denotes the entropy of the fused image, Std(IF) denotes the standard deviation of the fused image, and Edge(IF) denotes the edge preservation degree of the fused image.
The process of calculating the entropy includes:
the entropy is calculated as
E(IF) = -Σi pi log2 pi,
where pi denotes the probability distribution of the image pixel values.
The process of calculating the standard deviation includes:
the standard deviation is calculated as
Std(IF) = sqrt( (1 / (M*N)) Σi Σj (IF(i, j) - μ)^2 ),
where M and N denote the image size and μ denotes the mean gray value.
The process of calculating the edge preservation degree includes:
the edge strength and direction of the infrared image and of the visible light image are calculated separately with the Sobel edge operator; the edge strength is calculated as
G = sqrt(Gi^2 + Gj^2),
and the direction as
α = arctan(Gj / Gi),
where i and j denote directions and Gi and Gj denote the gradients along the i and j directions respectively.
The relative edge strength and relative direction of the assumed fused image F with respect to each source image X (the infrared image or the visible light image) are then calculated; the relative edge strength is
G^XF = GF / GX if GX > GF, and GX / GF otherwise,
and the relative direction is
A^XF = 1 - |αX - αF| / (π/2).
The preservation degree of the relative edge strength and the preservation degree of the relative direction are calculated as
Qg^XF = Гσ / (1 + exp(Kσ (G^XF - δσ))) and
Qα^XF = Гθ / (1 + exp(Kθ (A^XF - δθ))),
and the total edge preservation degree is defined as
Q^XF = Qg^XF * Qα^XF.
The total edge information is calculated as the edge-strength-weighted sum of Q^XF over both source images, normalized by the sum of the weights,
Edge(IF) = Σ(Q^(IR,F) G^IR + Q^(VIS,F) G^VIS) / Σ(G^IR + G^VIS),
where Гσ, Гθ, Kσ, Kθ, δσ and δθ are constants: Гσ = 0.994, Гθ = 0.9879, Kσ = -15, Kθ = -22, δσ = 0.5, δθ = 0.8.
In the above embodiment, information entropy and standard deviation mainly measure the amount of image information and the contrast, while edge similarity mainly evaluates how completely the edge structure information of the fusion result is preserved. Defining the fitness function with these three indices increases the amount of detail information and the preservation of edge contours in the multi-source fusion result, and also improves the clarity and contrast of the fusion result.
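As a hedged illustration, the sketch below computes a fitness value of the form S = E(IF) * Std(IF) * Edge(IF). The entropy and standard deviation follow their usual definitions; the edge term is a simplified Xydeas-Petrovic style measure built from the constants listed above, since the exact edge-preservation equations of the disclosure are not reproduced here. The function names, the use of scipy's Sobel filter, and the weighting by source edge strength are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import sobel

def entropy(img, bins=256):
    """Shannon entropy of an 8-bit image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def edge_preservation(src, fused, g_s=0.994, k_s=-15.0, d_s=0.5,
                      g_a=0.9879, k_a=-22.0, d_a=0.8, eps=1e-12):
    """Per-pixel edge preservation of `fused` w.r.t. one source (Qabf-style)."""
    sx_a, sy_a = sobel(src, axis=1), sobel(src, axis=0)
    sx_f, sy_f = sobel(fused, axis=1), sobel(fused, axis=0)
    str_a, str_f = np.hypot(sx_a, sy_a), np.hypot(sx_f, sy_f)
    ang_a = np.arctan(sy_a / (sx_a + eps))
    ang_f = np.arctan(sy_f / (sx_f + eps))
    # relative strength and orientation of the fused image w.r.t. the source
    G = np.where(str_a > str_f, (str_f + eps) / (str_a + eps),
                                (str_a + eps) / (str_f + eps))
    A = np.clip(1.0 - np.abs(ang_a - ang_f) / (np.pi / 2), 0.0, 1.0)
    Qg = g_s / (1.0 + np.exp(k_s * (G - d_s)))
    Qa = g_a / (1.0 + np.exp(k_a * (A - d_a)))
    return Qg * Qa, str_a          # preservation map and source edge strength

def fitness(ir, vis, fused):
    """S = E(IF) * Std(IF) * Edge(IF), with Edge weighted by source edge strength."""
    q_ir, w_ir = edge_preservation(ir, fused)
    q_vis, w_vis = edge_preservation(vis, fused)
    edge = np.sum(q_ir * w_ir + q_vis * w_vis) / (np.sum(w_ir + w_vis) + 1e-12)
    return entropy(fused) * float(np.std(fused)) * edge
```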
Optionally, in an embodiment of the present invention, as shown in Fig. 2, the process of using the determined weight terms as the weight terms of the image components to be fused includes:
S1: wolf pack initialization: the hunting space is set as an N x d Euclidean space, where N is the number of wolves and d is the dimension; the maximum number of iterations Kmax and the maximum number of scouting steps Tmax are defined; the dimensions correspond to the five weight coefficients w1, w2, w3, w4 and w5; specifically, d = 4.
S2: scouting: the fitness function value of each wolf is calculated according to the fitness function; the wolf with the maximum fitness function value is taken as the head wolf, and among the remaining wolves the one with the maximum fitness function value, excluding the head wolf, is set as a scout wolf; the scout wolves iterate according to the scouting formula
until the fitness function value of a scout wolf is greater than that of the head wolf, or the number of scouting steps reaches Tmax, at which point the iteration stops, where xid denotes the position of a scout wolf in the d-dimensional space, p is the moving direction of the scout wolf, and the scouting step length of the d-dimensional space is used for each move.
S3: prey attack: a wolf other than the head wolf is randomly selected as a fierce wolf and its position is updated according to the prey attack formula,
in which the attack step length is used and the position of the head wolf at iteration k+1 is denoted accordingly.
If the position of the head wolf is XL and its fitness function value is YL: when the fitness value Yi of a fierce wolf is greater than YL, YL is set to Yi and the calling behavior is performed; when Yi is less than YL, the fierce wolf continues the attack until dis <= dnear. The fitness function value of each wolf is output, and the weight terms and weight coefficients of the image components to be fused are determined from the output fitness function values. The weight terms include the infrared image cartoon component Cinf, the infrared image texture component Tinf, the visible light image cartoon component Cvis, the visible light image texture component Tvis, and the infrared-visible difference image cartoon component Cdif.
Before the output, an updating step is also included:
The head wolf position and the optimal objective are updated according to the "winner is king" principle: the besieging wolves run toward the head wolf position under its guidance; if the objective fitness after the run is greater than the current value, the current position is replaced, otherwise it remains unchanged. During the run, if the objective fitness of a besieging wolf at some position becomes greater than the head wolf's function value, that besieging wolf becomes the head wolf and calls the other wolves toward its own position.
Then the wolf pack is updated according to the "survival of the fittest" principle: after each iteration, the m wolves with the worst objective function values are eliminated, and m new wolves are generated at random according to the formula used to initialize the wolf pack positions.
In the above embodiment, the key combination information, i.e., the weight coefficient corresponding to each weight term, is found by the iterative wolf pack optimization algorithm, which improves fusion accuracy and resolves the contradiction in the prior art between keeping a complete edge contour and retaining as much texture detail information as possible when fusing infrared and visible light images.
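The sketch below is a deliberately simplified, wolf-pack-flavored search over the five weight coefficients in [0, 1]; it stands in for the scouting, calling, siege, and "survival of the fittest" steps described above rather than reproducing their exact update formulas, which are not given in full here. All parameter names and default values are assumptions.

```python
import numpy as np

def wolf_pack_optimize(fitness_fn, dim=5, n_wolves=20, n_iter=50,
                       step_scout=0.10, step_attack=0.30, n_renew=3, seed=None):
    """Simplified wolf-pack-style maximization of fitness_fn over [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    wolves = rng.random((n_wolves, dim))
    scores = np.array([fitness_fn(w) for w in wolves])
    for _ in range(n_iter):
        lead = wolves[np.argmax(scores)]
        # scouting (random perturbation) plus a siege step toward the head wolf
        cand = np.clip(wolves
                       + step_attack * (lead - wolves)
                       + step_scout * rng.standard_normal(wolves.shape), 0.0, 1.0)
        cand_scores = np.array([fitness_fn(w) for w in cand])
        better = cand_scores > scores            # "winner is king": keep improvements
        wolves[better], scores[better] = cand[better], cand_scores[better]
        # "survival of the fittest": re-initialize the weakest wolves
        worst = np.argsort(scores)[:n_renew]
        wolves[worst] = rng.random((n_renew, dim))
        scores[worst] = np.array([fitness_fn(w) for w in wolves[worst]])
    best = int(np.argmax(scores))
    return wolves[best], float(scores[best])
```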
Optionally, in an embodiment of the present invention, as shown in Fig. 2, the process of weighting the determined weight terms and weight coefficients includes:
Let the fused image be IF; it is calculated according to the following weighted combination formula:
IF = w1*Cinf + w2*Tinf + w3*Cvis + w4*Tvis + w5*Cdif,
where w1, w2, w3, w4 and w5 denote the weight coefficients, each ranging from 0 to 1.
In the above embodiment, the final fusion result is obtained by direct weighted combination. Unlike current mainstream multi-scale fusion methods, no reconstruction step is required; a reconstruction step tends to produce artifacts, which is unfavorable for later recognition.
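Tying the pieces together, the weighted combination itself is a single expression; the snippet below shows how it could be driven by the hypothetical helpers sketched earlier, with the weight vector coming from the optimizer and the components from the TV-l1 decomposition. It is a usage illustration only.

```python
import numpy as np

def fuse_with_weights(components, weights):
    """IF = w1*Cinf + w2*Tinf + w3*Cvis + w4*Tvis + w5*Cdif."""
    C_inf, T_inf, C_vis, T_vis, C_dif = components
    w1, w2, w3, w4, w5 = weights
    return w1*C_inf + w2*T_inf + w3*C_vis + w4*T_vis + w5*C_dif

# Example wiring (names refer to the sketches above, not to any released library):
# C_inf, T_inf = tv_l1_decompose(ir);  C_vis, T_vis = tv_l1_decompose(vis)
# C_dif, _     = tv_l1_decompose(ir - vis)
# parts = (C_inf, T_inf, C_vis, T_vis, C_dif)
# best_w, _ = wolf_pack_optimize(lambda w: fitness(ir, vis, fuse_with_weights(parts, w)))
# fused = np.clip(fuse_with_weights(parts, best_w), 0, 255)
```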
Optionally, in an embodiment of the present invention, the process of performing the difference calculation on the infrared image and the visible light image includes:
The difference calculation is performed on the infrared image and the visible light image according to the difference formula
Idif = Iinf - Ivis,
where Idif denotes the infrared-visible difference image, Iinf denotes the infrared image, and Ivis denotes the visible light image.
In the above embodiment, because of the characteristics of infrared sensors, the infrared image contains additional edge contour information relative to the visible light image; subtracting the visible light image from the infrared image therefore yields the additional features or regions that are absent from the visible light source image, while the common components are retained elsewhere to help the overall fusion quality.
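A small practical note for the difference step: if the source images are stored as unsigned 8-bit arrays, subtracting them directly wraps around, so a sketch like the following casts to a floating-point type first (an implementation detail assumed here, not specified by the disclosure).

```python
import numpy as np

def difference_image(ir, vis):
    """Idif = Iinf - Ivis, computed in float to avoid uint8 wrap-around."""
    return ir.astype(np.float64) - vis.astype(np.float64)
```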
Optionally, in an embodiment of the present invention, as shown in Fig. 3, an infrared and visible light image fusion device comprises:
a difference processing module, configured to perform a difference calculation on an infrared image and a visible light image to obtain an infrared-visible difference image;
a decomposition module, configured to decompose the infrared image, the visible light image, and the infrared-visible difference image according to a total variation model to obtain infrared image cartoon and texture components, visible light image cartoon and texture components, and difference image cartoon and texture components;
a function construction module, configured to construct the fitness function of an iterative wolf pack optimization algorithm;
a weight determination module, configured to determine, according to the wolf pack optimization algorithm and the constructed fitness function, weight terms and corresponding weight coefficients among the infrared image cartoon and texture components, the visible light image cartoon and texture components, and the difference image cartoon and texture components, the determined weight terms and weight coefficients serving as the weight terms and weight coefficients of the fused image components, the fused image being the image obtained by combining the infrared image and the visible light image;
a fusion module, configured to weight the determined weight terms and weight coefficients and obtain the fused image from the result.
Optionally, as another embodiment of the present invention, an infrared and visible light image fusion device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the infrared and visible light image fusion method described above is implemented.
Optionally, as another embodiment of the present invention, a computer-readable storage medium stores a computer program; when the computer program is executed by a processor, the infrared and visible light image fusion method described above is implemented.
As shown in Fig. 4, the labels in Fig. 4 denote: (A) source infrared cartoon component; (B) source infrared texture component; (C) source visible light cartoon component; (D) source visible light texture component; (F) difference cartoon component; (E) difference texture component.
It can be clearly seen that the cartoon components after the total variation decomposition carry the rough contour information, while the detail information of the texture components is very distinct. On this basis, the cartoon and texture components obtained by decomposing the infrared and visible light images are chosen for the final weighted combination. To extract the difference characteristic information between the infrared and visible light images, their difference image is also computed; as can be seen from (E), the cartoon component of the difference image mainly reflects the difference contour edge information between the two source images, and these contour edges are very important information in the image fusion process. The final weighted components therefore mainly include the infrared image cartoon component, the infrared image texture component, the visible light image cartoon component, the visible light image texture component, and the difference image cartoon component, corresponding respectively to the five weight coefficients w1, w2, w3, w4 and w5.
As shown in Fig. 5, the labels in Fig. 5 denote: (a) visible light image; (b) infrared image; (c) NSCT method; (d) Shearlet method; (e) SR method; (f) TV variational multi-scale analysis method; (g) the fusion method proposed by the present invention.
In terms of subjective vision, the NSCT method is slightly better than the Shearlet method in edge preservation because of its translation invariance. Although the SR method can extract the spatial detail information in the source images, the edges of the person region in the scene are still relatively blurred and the contrast is low. The variational multi-scale analysis method decomposes with a variational multi-scale model and selects texture information with guided filtering, and its fusion result has relatively high edge preservation and contrast, with texture information clearer than that of the preceding methods. The present method uses the wolf pack algorithm to optimize the combination weights of the texture and cartoon components; its result has high contrast and edge detail, slightly exceeding the variational multi-scale analysis method in contrast and edge detail information, and its subjective visual effect is the best.
The following table gives the evaluation index data:
Objective indices are introduced to evaluate the results of the various fusion algorithms. Four commonly used image fusion performance indices are selected for the objective quality evaluation of the several fusion methods: the information-theoretic evaluation index QMI, the human visual sensitivity evaluation index QCB, the image structural similarity evaluation index QY, and the gradient feature evaluation index QG. The mutual information index QMI measures the degree of correlation between two images and is used here to measure how much source image information the final fusion result contains; the gradient index QG measures the gradient information transferred from the infrared and visible light source images to the final fusion result; the structural similarity index QY measures how well the fusion result preserves structural information; and the visual sensitivity index QCB takes the mean value of the global quality map.
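For reference, the mutual-information index QMI mentioned above can be computed in one common form as the sum of the mutual information between each source image and the fused image; the sketch below shows that form (normalized variants also exist, and the exact variant used in the experiments is not specified here).

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Mutual information (in bits) between two 8-bit images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, 255], [0, 255]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def q_mi(ir, vis, fused):
    """QMI as the total source information carried by the fused image."""
    return mutual_information(ir, fused) + mutual_information(vis, fused)
```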
The present invention decomposes the infrared source image, the visible light source image, and the difference image into cartoon components and texture components by the total variation model, determines weight terms and weight coefficients from the source images and components by the iterative wolf pack optimization algorithm, and combines the determined weight terms and weight coefficients by weighting to obtain the final fused image. The fusion result is robust to noise while preserving complete contour and detail information, and its clarity and contrast are also relatively high.
In the several embodiments provided by this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a logical functional division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present invention.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and these modifications or substitutions shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. An infrared and visible light image fusion method, comprising the steps of:
performing a difference calculation on an infrared image and a visible light image to obtain an infrared-visible difference image;
decomposing the infrared image, the visible light image, and the infrared-visible difference image according to a total variation model to obtain infrared image cartoon and texture components, visible light image cartoon and texture components, and difference image cartoon and texture components;
constructing a fitness function of an iterative wolf pack optimization algorithm;
determining, according to the wolf pack optimization algorithm and the constructed fitness function, weight terms and corresponding weight coefficients among the infrared image cartoon and texture components, the visible light image cartoon and texture components, and the difference image cartoon and texture components, as the weight terms and weight coefficients of fused image components, the fused image being the image obtained by combining the infrared image and the visible light image;
performing weighting according to the determined weight terms and weight coefficients to obtain the fused image.
2. The infrared and visible light image fusion method according to claim 1, wherein the process of decomposing the infrared image, the visible light image, and the infrared-visible difference image according to the total variation model comprises:
when decomposing the infrared image, defining the infrared image according to the total variation problem of the total variation model as:
Iinf = Tinf + Cinf,
wherein Iinf denotes the infrared image, Tinf denotes the infrared image texture component, and Cinf denotes the infrared image cartoon component;
when decomposing the visible light image, defining the visible light image according to the total variation problem of the total variation model as:
Ivis = Tvis + Cvis,
wherein Ivis denotes the visible light image, Tvis denotes the visible light image texture component, and Cvis denotes the visible light image cartoon component;
when decomposing the infrared-visible difference image, defining the infrared-visible difference image according to the total variation problem of the total variation model as:
Idif = Tdif + Cdif,
wherein Idif denotes the infrared-visible difference image, Tdif denotes the infrared-visible difference image texture component, and Cdif denotes the infrared-visible difference image cartoon component;
the total variation model being the TV-l1 model; when decomposing the infrared image, computing the minimization functional of the infrared image according to the TV-l1 model, expressed as the first formula:
Cinf = argmin { ∫Ω |∇Cinf| dΩ + λ||Iinf - Cinf||1 },
wherein the solution of the first formula is the infrared image cartoon component, |∇Cinf| denotes the total variation regularization term of the infrared image cartoon component, λ||Iinf - Cinf||1 denotes the fidelity term, and λ denotes the regularization parameter;
when decomposing the visible light image, computing the minimization functional of the visible light image according to the TV-l1 model, expressed as the second formula:
Cvis = argmin { ∫Ω |∇Cvis| dΩ + λ||Ivis - Cvis||1 },
wherein the solution of the second formula is the visible light image cartoon component, |∇Cvis| denotes the total variation regularization term of the visible light image cartoon component, λ||Ivis - Cvis||1 denotes the fidelity term, and λ denotes the regularization parameter;
when decomposing the infrared-visible difference image, computing the minimization functional of the infrared-visible difference image according to the TV-l1 model, expressed as the third formula:
Cdif = argmin { ∫Ω |∇Cdif| dΩ + λ||Idif - Cdif||1 },
wherein the solution of the third formula is the infrared-visible difference image cartoon component, |∇Cdif| denotes the total variation regularization term of the infrared-visible difference image cartoon component, λ||Idif - Cdif||1 denotes the fidelity term, and λ denotes the regularization parameter;
when decomposing the infrared image, computing the infrared image texture component according to the fourth formula:
Tinf = Iinf - Cinf;
when decomposing the visible light image, computing the visible light image texture component according to the fifth formula:
Tvis = Ivis - Cvis;
when decomposing the infrared-visible difference image, computing the infrared-visible difference image texture component according to the sixth formula:
Tdif = Idif - Cdif;
and solving the minimization functionals of the infrared image, the visible light image, and the infrared-visible difference image as optimization problems by the gradient descent method, wherein (i, j) denotes the position of a pixel in the infrared image, the visible light image, or the infrared-visible difference image, the parameters ∇+ and ∇- denote the forward and backward differences respectively, ∇Cij denotes the gradient magnitude, n is the iteration number, Δm and Δn are the image grid spacings, Δt denotes the time step, and ε is set to a very small value.
3. The infrared and visible light image fusion method according to claim 1, wherein the process of constructing the fitness function of the wolf pack optimization algorithm comprises:
assuming the fused image is IF and constructing the fitness function as:
S = E(IF) * Std(IF) * Edge(IF),
wherein E(IF) denotes the entropy of the fused image, Std(IF) denotes the standard deviation of the fused image, and Edge(IF) denotes the edge preservation degree of the fused image;
the process of calculating the entropy comprising:
calculating the entropy as
E(IF) = -Σi pi log2 pi,
wherein pi denotes the probability distribution of the image pixel values;
the process of calculating the standard deviation comprising:
calculating the standard deviation as
Std(IF) = sqrt( (1 / (M*N)) Σi Σj (IF(i, j) - μ)^2 ),
wherein M and N denote the image size and μ denotes the mean gray value;
the process of calculating the edge preservation degree comprising:
calculating the edge strength and direction of the infrared image and of the visible light image separately with the Sobel edge operator, the edge strength being
G = sqrt(Gi^2 + Gj^2),
and the direction being
α = arctan(Gj / Gi),
wherein i and j denote directions and Gi and Gj denote the gradients along the i and j directions respectively;
calculating the relative edge strength and relative direction of the assumed fused image F with respect to each source image X (the infrared image or the visible light image), the relative edge strength being
G^XF = GF / GX if GX > GF, and GX / GF otherwise,
and the relative direction being
A^XF = 1 - |αX - αF| / (π/2);
calculating the preservation degree of the relative edge strength and the preservation degree of the relative direction as
Qg^XF = Гσ / (1 + exp(Kσ (G^XF - δσ))) and
Qα^XF = Гθ / (1 + exp(Kθ (A^XF - δθ)));
defining the total edge preservation degree as
Q^XF = Qg^XF * Qα^XF;
and calculating the total edge information as
Edge(IF) = Σ(Q^(IR,F) G^IR + Q^(VIS,F) G^VIS) / Σ(G^IR + G^VIS),
wherein Гσ, Гθ, Kσ, Kθ, δσ and δθ are constants: Гσ = 0.994, Гθ = 0.9879, Kσ = -15, Kθ = -22, δσ = 0.5, δθ = 0.8.
4. The infrared and visible light image fusion method according to claim 3, wherein the process of using the determined weight terms as the weight terms of the image components to be fused comprises:
setting the hunting space as an N x d Euclidean space, wherein N is the number of wolves and d is the dimension, defining the maximum number of iterations Kmax and the maximum number of scouting steps Tmax, the dimensions corresponding to the five weight coefficients w1, w2, w3, w4 and w5;
calculating the fitness function value of each wolf according to the fitness function, taking the wolf with the maximum fitness function value as the head wolf, setting the wolf with the maximum fitness function value among the remaining wolves, excluding the head wolf, as a scout wolf, and iterating according to the scouting formula
until the fitness function value of a scout wolf is greater than that of the head wolf or the number of scouting steps reaches Tmax, at which point the iteration stops, wherein xid denotes the position of a scout wolf in the d-dimensional space, p is the moving direction of the scout wolf, and the scouting step length of the d-dimensional space is used for each move;
randomly selecting a wolf other than the head wolf as a fierce wolf and updating it according to the prey attack formula,
wherein the attack step length is used and the position of the head wolf at iteration k+1 is denoted accordingly;
if the position of the head wolf is XL and its fitness function value is YL: when the fitness value Yi of a fierce wolf is greater than YL, setting YL = Yi and performing the calling behavior; when Yi is less than YL, continuing the attack until dis <= dnear; outputting the fitness function value of each wolf, and determining from the output fitness function values the weight terms and weight coefficients of the image components to be fused, the weight terms comprising the infrared image cartoon component Cinf, the infrared image texture component Tinf, the visible light image cartoon component Cvis, the visible light image texture component Tvis, and the infrared-visible difference image cartoon component Cdif.
5. The infrared and visible light image fusion method according to claim 3, wherein the process of weighting the determined weight terms and weight coefficients comprises:
letting the fused image be IF and calculating it according to the following weighted combination formula:
IF = w1*Cinf + w2*Tinf + w3*Cvis + w4*Tvis + w5*Cdif,
wherein w1, w2, w3, w4 and w5 denote the weight coefficients, each ranging from 0 to 1.
6. The infrared and visible light image fusion method according to claim 1, wherein the process of performing the difference calculation on the infrared image and the visible light image comprises:
performing the difference calculation on the infrared image and the visible light image according to the difference formula:
Idif = Iinf - Ivis,
wherein Idif denotes the infrared-visible difference image, Iinf denotes the infrared image, and Ivis denotes the visible light image.
7. An infrared and visible light image fusion device, comprising:
a difference processing module, configured to perform a difference calculation on an infrared image and a visible light image to obtain an infrared-visible difference image;
a decomposition module, configured to decompose the infrared image, the visible light image, and the infrared-visible difference image according to a total variation model to obtain infrared image cartoon and texture components, visible light image cartoon and texture components, and difference image cartoon and texture components;
a function construction module, configured to construct a fitness function of an iterative wolf pack optimization algorithm;
a weight determination module, configured to determine, according to the wolf pack optimization algorithm and the constructed fitness function, weight terms and corresponding weight coefficients among the infrared image cartoon and texture components, the visible light image cartoon and texture components, and the difference image cartoon and texture components, as the weight terms and weight coefficients of fused image components, the fused image being the image obtained by combining the infrared image and the visible light image;
a fusion module, configured to weight the determined weight terms and weight coefficients and obtain the fused image from the result.
8. An infrared and visible light image fusion device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the infrared and visible light image fusion method according to any one of claims 1 to 6 is implemented.
9. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the infrared and visible light image fusion method according to any one of claims 1 to 6 is implemented.
CN201910579632.0A 2019-06-28 2019-06-28 Infrared image and visible light image fusion method and device and storage medium Active CN110349117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910579632.0A CN110349117B (en) 2019-06-28 2019-06-28 Infrared image and visible light image fusion method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910579632.0A CN110349117B (en) 2019-06-28 2019-06-28 Infrared image and visible light image fusion method and device and storage medium

Publications (2)

Publication Number Publication Date
CN110349117A true CN110349117A (en) 2019-10-18
CN110349117B CN110349117B (en) 2023-02-28

Family

ID=68177318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910579632.0A Active CN110349117B (en) 2019-06-28 2019-06-28 Infrared image and visible light image fusion method and device and storage medium

Country Status (1)

Country Link
CN (1) CN110349117B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223049A (en) * 2020-01-07 2020-06-02 武汉大学 Remote sensing image variation fusion method based on structure-texture decomposition
CN111353966A (en) * 2020-03-03 2020-06-30 西华大学 Image fusion method based on total variation deep learning and application and system thereof
CN111680752A (en) * 2020-06-09 2020-09-18 重庆工商大学 Infrared and visible light image fusion method based on Framelet framework
CN113139893A (en) * 2020-01-20 2021-07-20 北京达佳互联信息技术有限公司 Image translation model construction method and device and image translation method and device
CN114143420A (en) * 2020-09-04 2022-03-04 聚晶半导体股份有限公司 Double-sensor camera system and privacy protection camera method thereof
CN116485694A (en) * 2023-04-25 2023-07-25 中国矿业大学 Infrared and visible light image fusion method and system based on variation principle
CN117218048A (en) * 2023-11-07 2023-12-12 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model
CN117612093A (en) * 2023-11-27 2024-02-27 北京东青互联科技有限公司 Dynamic environment monitoring method, system, equipment and medium for data center

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408700A (en) * 2014-11-21 2015-03-11 南京理工大学 Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images
CN105069768A (en) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 Visible-light image and infrared image fusion processing system and fusion method
CN107248150A (en) * 2017-07-31 2017-10-13 杭州电子科技大学 A kind of Multiscale image fusion methods extracted based on Steerable filter marking area

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408700A (en) * 2014-11-21 2015-03-11 南京理工大学 Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images
CN105069768A (en) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 Visible-light image and infrared image fusion processing system and fusion method
US20180227509A1 (en) * 2015-08-05 2018-08-09 Wuhan Guide Infrared Co., Ltd. Visible light image and infrared image fusion processing system and fusion method
CN107248150A (en) * 2017-07-31 2017-10-13 杭州电子科技大学 A kind of Multiscale image fusion methods extracted based on Steerable filter marking area

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
冯鑫 et al.: "Infrared and visible light image fusion based on variational multi-scale decomposition", Acta Electronica Sinica (《电子学报》) *
常莉红: "An image fusion method based on feature decomposition", Journal of Zhejiang University (Science Edition) (《浙江大学学报(理学版)》) *
沈瑜 et al.: "Infrared and visible light fusion based on the Tetrolet transform", Spectroscopy and Spectral Analysis (《光谱学与光谱分析》) *
邓苗 et al.: "Multi-scale transform image fusion with weight optimization based on total variation", Journal of Electronics & Information Technology (《电子与信息学报》) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223049B (en) * 2020-01-07 2021-10-22 武汉大学 Remote sensing image variation fusion method based on structure-texture decomposition
CN111223049A (en) * 2020-01-07 2020-06-02 武汉大学 Remote sensing image variation fusion method based on structure-texture decomposition
CN113139893A (en) * 2020-01-20 2021-07-20 北京达佳互联信息技术有限公司 Image translation model construction method and device and image translation method and device
CN113139893B (en) * 2020-01-20 2023-10-03 北京达佳互联信息技术有限公司 Image translation model construction method and device and image translation method and device
CN111353966B (en) * 2020-03-03 2024-02-09 南京一粹信息科技有限公司 Image fusion method based on total variation deep learning and application and system thereof
CN111353966A (en) * 2020-03-03 2020-06-30 西华大学 Image fusion method based on total variation deep learning and application and system thereof
CN111680752A (en) * 2020-06-09 2020-09-18 重庆工商大学 Infrared and visible light image fusion method based on Framelet framework
CN114143420A (en) * 2020-09-04 2022-03-04 聚晶半导体股份有限公司 Double-sensor camera system and privacy protection camera method thereof
CN116485694A (en) * 2023-04-25 2023-07-25 中国矿业大学 Infrared and visible light image fusion method and system based on variation principle
CN116485694B (en) * 2023-04-25 2023-11-07 中国矿业大学 Infrared and visible light image fusion method and system based on variation principle
CN117218048A (en) * 2023-11-07 2023-12-12 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model
CN117218048B (en) * 2023-11-07 2024-03-08 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model
CN117612093A (en) * 2023-11-27 2024-02-27 北京东青互联科技有限公司 Dynamic environment monitoring method, system, equipment and medium for data center

Also Published As

Publication number Publication date
CN110349117B (en) 2023-02-28

Similar Documents

Publication Publication Date Title
CN110349117A (en) A kind of infrared image and visible light image fusion method, device and storage medium
Elhoseny et al. Optimal bilateral filter and convolutional neural network based denoising method of medical image measurements
Cao et al. Thick cloud removal in Landsat images based on autoregression of Landsat time-series data
CN104899866B (en) A kind of intelligentized infrared small target detection method
CN106897986B (en) A kind of visible images based on multiscale analysis and far infrared image interfusion method
CN109493338A (en) Hyperspectral image abnormal detection method based on combined extracting sky spectrum signature
CN104751185B (en) SAR image change detection based on average drifting genetic cluster
CN109934178A (en) A kind of method for detecting infrared puniness target based on Kronecker base rarefaction representation
CN108921062B (en) Gait recognition method combining multiple gait features and cooperative dictionary
Zhang et al. Extraction of tree crowns damaged by Dendrolimus tabulaeformis Tsai et Liu via spectral-spatial classification using UAV-based hyperspectral images
Palandro et al. Detection of changes in coral reef communities using Landsat-5 TM and Landsat-7 ETM+ data
US9183671B2 (en) Method for accelerating Monte Carlo renders
CN114187214A (en) Infrared and visible light image fusion system and method
Quan et al. Visible and infrared image fusion based on curvelet transform
CN113610905A (en) Deep learning remote sensing image registration method based on subimage matching and application
Zhang et al. Classification method of CO2 hyperspectral remote sensing data based on neural network
Kartikeyan et al. Contextual techniques for classification of high and low resolution remote sensing data
CN112767267B (en) Image defogging method based on simulation polarization fog-carrying scene data set
CN109948571B (en) Optical remote sensing image ship detection method
Al Najar et al. A combined color and wave-based approach to satellite derived bathymetry using deep learning
CN113256733A (en) Camera spectral sensitivity reconstruction method based on confidence voting convolutional neural network
CN109377447B (en) Contourlet transformation image fusion method based on rhododendron search algorithm
CN115330876B (en) Target template graph matching and positioning method based on twin network and central position estimation
Azhar et al. A framework for multiscale intertidal sandflat mapping: A case study in the Whangateau estuary
Bloechl et al. A comparison of real and simulated airborne multisensor imagery

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant