CN109447933B - Infrared and visible image fusion method based on synchronous decomposition of unique information - Google Patents

Infrared and visible image fusion method based on synchronous decomposition of unique information

Info

Publication number
CN109447933B
CN109447933B (application CN201811351004.9A)
Authority
CN
China
Prior art keywords
image
infrared
peculiar
fusion
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811351004.9A
Other languages
Chinese (zh)
Other versions
CN109447933A (en)
Inventor
何贵青
纪佳琪
霍胤丞
王琪瑶
张琪琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN201811351004.9A
Publication of CN109447933A
Application granted
Publication of CN109447933B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The present invention proposes an infrared and visible image fusion method based on synchronous decomposition of unique information, comprising the following steps: image preprocessing, image mean removal, dictionary learning and construction, joint sparse decomposition of the images, sparse coefficient fusion, mean fusion, and image reconstruction. Under the joint sparse representation model, the invention decomposes the unique information of the infrared and visible images by synchronous orthogonal matching pursuit, so that the fusion of each original image block is refined to a fusion of the representation coefficients of all atoms participating in that block, improving the comprehensiveness and completeness of the fusion. The resulting fused image has distinct features and comprehensive information: not only is the unique information of the infrared and visible images fused together, but the fused image as a whole is clearer and free of local block distortion.

Description

Infrared and visible image fusion method based on synchronous decomposition of unique information
Technical field
The present invention relates to the field of image fusion, and more particularly to an infrared and visible image fusion method based on synchronous decomposition of unique information.
Background art
Because infrared sensors and visible light sensors have different imaging principles, the images acquired by the two kinds of sensors contain both complementary and redundant information. Infrared imaging can detect hidden or deliberately camouflaged targets and operates continuously around the clock, but its spatial resolution is poor. Visible images are formed from reflected illumination; they have relatively high spatial resolution, render most scene information clearly, and contain richer detail.
In recent years, the image sparse representation model has received significant attention. In this model, a small number of atoms from a redundant dictionary are linearly combined to represent the image signal, and the nonzero coefficients of these atoms capture the main feature structure of the image; the approach is now widely used in image fusion. Although this method decomposes multi-source images synchronously, it does not separate out their redundancy, which is unfavorable for fusion. The joint sparse representation model was therefore proposed, on the basis of sparse representation, to solve the feature extraction problem of multi-source images: it defines the redundant and complementary information of the multi-source images as shared information and unique information, respectively. Although joint sparse representation extracts the characteristic information of infrared and visible images well, its decomposition is still solved by the OMP algorithm, and the fusion rule still processes the unique sparse coefficient vectors. When the unique information to be fused is sparsely decomposed, the two images may select different atoms from the dictionary — similar to decomposing the source images with different wavelet bases — so the decompositions are essentially two unrelated, isolated processes, making coefficient trade-off and fusion difficult and ruling out synchronous decomposition. The present invention therefore proposes an infrared and visible image fusion method based on synchronous decomposition of unique information to overcome these shortcomings of the prior art.
Summary of the invention
In view of the above problems, the present invention proposes, under the joint sparse representation model, an image fusion method that decomposes the unique information of the infrared and visible images by synchronous orthogonal matching pursuit. The fusion of each original image block is thereby refined to a fusion of the representation coefficients of all atoms participating in that block, improving the comprehensiveness and completeness of the fusion. The resulting fused image has distinct features and comprehensive information: not only is the unique information of the infrared and visible images fused together, but the fused image as a whole is clearer and free of local block distortion.
The present invention proposes an infrared and visible image fusion method based on synchronous decomposition of unique information, comprising the following steps:
Step 1: image preprocessing
Label the infrared image participating in the fusion as Y_1 and the visible image participating in the fusion as Y_2, then divide Y_1 and Y_2 into P image blocks each by a sliding window, obtaining the data matrices X_1 and X_2 of the infrared and visible images;
Step 2: image mean value removing
Remove the mean of each column vector of X_1 and X_2 to obtain the zero-mean image data Y_1 and Y_2, then construct the new joint data Y;
Step 3: dictionary learning and building
Randomly select samples from the zero-mean image data Y_1 and Y_2, learn the redundant dictionary D by the KSVD algorithm, and construct the new dictionary D̃ under the joint sparse representation;
Step 4: the joint sparse of image is decomposed
According to the principle Y = D̃·α, decompose the unique information of the infrared image Y_1 and the visible image Y_2 to obtain the shared coefficient α^c and the unique coefficients α_1^u and α_2^u of the infrared and visible images;
Step 5: sparse coefficient fusion
Fuse the t-th sparse coefficient according to formula (1) to obtain the fused sparse coefficient α_f(t):
α_f^i(t) = α_{j_i}^{u,i}(t),  j_i = argmax_{k ∈ {1,2}} |α_k^{u,i}(t)|   (1)
where α_1^{u,i}(t) denotes the i-th element of the unique sparse coefficient of the t-th block of the infrared image, j_i indexes the source (infrared or visible) whose sparse coefficient element has the larger modulus at the same position, and α_f^u(t) is the unique fusion coefficient of the t-th block obtained by taking the element of larger modulus at each position;
Step 6: mean value fusion
Fuse the t-th mean according to formula (2) to obtain the fused image mean m(t):
m(t) = τ·m_1(t) + (1 − τ)·m_2(t)
τ = 1 / (1 + exp{−β(||Y_1(t)||_2 − ||Y_2(t)||_2)})   (2)
where Y_1(t) and Y_2(t) are the t-th zero-mean image data vectors and τ is the weight;
Step 7: image reconstruction
Reconstruct from the sparse coefficient α_f(t) and the image mean m(t), then accumulate and average the reconstructed data to obtain the final fused image I_F.
A further improvement is that in step 1 the infrared image participating in the fusion is labeled Y_1 and the visible image Y_2; then Y_1 and Y_2 are each divided into P image blocks by a sliding window of size √n × √n, where n is the dimension of the dictionary atoms; each image block is straightened into a column vector and the vectors are arranged in order, obtaining the data matrices X_1 and X_2 of the infrared and visible images, each of size n × P.
A further improvement is that in step 2 the means m_1(t) and m_2(t) of the column vectors of X_1 and X_2 are first removed, where t ∈ [1, P] indexes the t-th column vector, yielding the zero-mean image data Y_1 and Y_2, from which the new joint data Y is constructed.
A further improvement is that step 3 proceeds as follows: from the zero-mean image data Y_1 and Y_2 obtained in step 2, randomly select L samples to form a training set, learn the redundant dictionary D by the KSVD algorithm, and then construct the new dictionary D̃ under the joint sparse representation.
A further improvement is that the specific calculation of the Y = D̃·α principle of step 4 is as follows: assume the infrared image Y_1 and the visible image Y_2 share the redundant dictionary D; each image is then composed of a shared sparse component and a unique sparse component, as in formula (3):
Y_k = Y^c + Y_k^u = D·α^c + D·α_k^u   (3)
where k = 1, 2, so Y_k is the infrared image Y_1 or the visible image Y_2, Y^c is the shared part of Y_1 and Y_2, Y_k^u is the unique part, and α^c and α_k^u are respectively the shared and unique sparse coefficients of image Y_k under the dictionary D;
Writing formula (3) in matrix form gives formula (4):
[Y_1; Y_2] = [D, D, 0; D, 0, D] · [α^c; α_1^u; α_2^u]   (4)
Letting Y = [Y_1; Y_2], D̃ = [D, D, 0; D, 0, D] and α = [α^c; α_1^u; α_2^u], formula (4) simplifies to Y = D̃·α.
A further improvement concerns the problem of finding the sparse representation coefficient α of the signal Y under the dictionary D̃ in step 4; the sparse constraint model is formula (5):
min_α ||α||_0  s.t.  ||Y − D̃·α||_2 ≤ ε   (5)
where formula (5) can be solved by any of the BP, MP, or OMP algorithms.
A further improvement is that formula (5) is solved using the OMP algorithm.
A further improvement is that the joint sparse decomposition of the images in step 4 proceeds as follows: the image sequence is first converted into small blocks, and each block is then decomposed. Let the joint signal to be decomposed be [y_1, y_2]^T; the OMP algorithm is modified so that the decomposition distinguishes the following three cases:
S1: if the atom selected in this iteration belongs to the shared atom component, decompose normally according to the OMP algorithm;
S2: if the atom selected in this iteration belongs to the infrared atom component, decompose y_1 on d_i, i.e., solve for the coefficient α of y_1 on d_i, and at the same time solve for the coefficient β of y_2 on d_i;
S3: if the atom selected in this iteration belongs to the visible atom component, where conventional OMP would decompose only y_2 on d_i, y_1 is likewise decomposed once on d_i.
A further improvement is that β in formula (2) of step 6 is a coefficient with value 0.01.
A further improvement is that in step 7 the fused sparse coefficient α_f(t) of step 5 and the image mean m(t) of step 6 are first reconstructed to obtain the fused data X(t) of the t-th block; the final fused image I_F is then obtained from X(t) by accumulation and averaging, the fusion formula being formula (6):
X(t) = D·α_f(t) + m(t)·E
E = [1, 1, …, 1]^T,  E ∈ R^n   (6)
where E is a column vector whose elements are all 1, with as many elements as the window size n of step 1.
The beneficial effects of the invention are as follows: under the joint sparse representation model, the unique information of the infrared and visible images is decomposed by synchronous orthogonal matching pursuit, so that the fusion of each original image block is refined to a fusion of the representation coefficients of all atoms participating in that block, improving the comprehensiveness and completeness of the fusion. Because the dictionary atoms are learned adaptively from samples, the representation strength of the image on the atomic structures is improved. The method takes the modulus of the atomic coefficients as the "activity level" guiding the fusion; the choose-max rule on the modulus maximally preserves the atomic features of the unique information of the infrared and visible images, so the fusion coefficients retain all atoms of high activity and keep more unique image information. The resulting fused image has distinct features and comprehensive information: not only is the unique information of the infrared and visible images fused together, but the fused image as a whole is clearer and free of local block distortion. Compared with the commonly used multiscale analysis, sparse representation, and joint sparse representation methods, the proposed algorithm achieves better fusion in both subjective vision and objective evaluation indices.
Detailed description of the invention
Fig. 1 compares the decomposition of the unique information of the infrared and visible images in the method of the invention.
Fig. 2 is a schematic of the dictionary model of the joint sparse representation of the invention.
Fig. 3 is a schematic of the joint sparse representation of the infrared and visible images in the method of the invention.
Fig. 4 is a schematic of the sparse coefficient fusion rule of the invention.
Specific embodiment
To make the technical means, purpose, and effects of the invention easy to understand, the invention is further explained below with reference to a specific embodiment.
As shown in Figs. 1-4, this embodiment proposes an infrared and visible image fusion method based on synchronous decomposition of unique information, comprising the following steps:
Step 1: image preprocessing
Label the infrared image participating in the fusion as Y_1 and the visible image participating in the fusion as Y_2, then divide Y_1 and Y_2 each into P image blocks by a sliding window of size √n × √n, where n is the dimension of the dictionary atoms; straighten each image block into a column vector and arrange the vectors in order, obtaining the data matrices X_1 and X_2 of the infrared and visible images, each of size n × P;
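Step 1 can be sketched as follows in Python/NumPy. This is a minimal illustration, not the patent's implementation; the function name, stride parameter, and toy image are our assumptions (the patent's window is √n × √n and slides over the whole image):

```python
import numpy as np

def extract_patches(img, patch, stride=1):
    """Slide a patch x patch window over img and straighten every block
    into a column vector, giving the n x P data matrix (n = patch**2)."""
    h, w = img.shape
    cols = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            cols.append(img[r:r + patch, c:c + patch].reshape(-1))
    return np.stack(cols, axis=1)

# toy 4x4 "infrared" image, 2x2 window, stride 2 -> P = 4 blocks
Y1 = np.arange(16, dtype=float).reshape(4, 4)
X1 = extract_patches(Y1, 2, stride=2)
print(X1.shape)  # (4, 4): n = 4 pixels per block, P = 4 blocks
```

With stride 1 the blocks overlap, which is what makes the accumulate-and-average reconstruction of step 7 meaningful.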
Step 2: image mean value removing
First remove the means m_1(t) and m_2(t) of the column vectors of X_1 and X_2, where t ∈ [1, P] indexes the t-th column vector, obtaining the zero-mean image data Y_1 and Y_2, and construct the new joint data Y;
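A minimal NumPy sketch of the mean removal of step 2 (names are ours); the means m(t) are kept because steps 6 and 7 need them:

```python
import numpy as np

def remove_column_means(X):
    """Subtract the mean m(t) from each column vector X(t), t = 1..P,
    returning the zero-mean data and the stored means."""
    m = X.mean(axis=0)
    return X - m, m

X1 = np.array([[1., 3.],
               [3., 5.]])
Y1, m1 = remove_column_means(X1)
print(m1)  # [2. 4.]
```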
Step 3: dictionary learning and building
From the zero-mean image data Y_1 and Y_2 obtained in step 2, randomly select L samples to form a training set, learn the redundant dictionary D by the KSVD algorithm, and then construct the new dictionary D̃ under the joint sparse representation;
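The new dictionary of step 3 stacks the learned dictionary D into the block structure [D, D, 0; D, 0, D] implied by the matrix form of the joint model. A sketch (the KSVD learning itself is not shown; a small identity matrix stands in for the learned D, and the function name is ours):

```python
import numpy as np

def build_joint_dictionary(D):
    """Construct the joint-sparse dictionary [[D, D, 0], [D, 0, D]]:
    the first block of atoms carries shared information, the second
    infrared-unique information, the third visible-unique information."""
    n, K = D.shape
    Z = np.zeros((n, K))
    return np.vstack([np.hstack([D, D, Z]),
                      np.hstack([D, Z, D])])  # shape (2n, 3K)

D = np.eye(2)  # stand-in for a KSVD-learned redundant dictionary
Dj = build_joint_dictionary(D)
print(Dj.shape)  # (4, 6)
```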
Step 4: the joint sparse of image is decomposed
Assume the infrared image Y_1 and the visible image Y_2 share the redundant dictionary D; each image is composed of a shared sparse component and a unique sparse component, as in formula (3):
Y_k = Y^c + Y_k^u = D·α^c + D·α_k^u   (3)
where k = 1, 2, so Y_k is the infrared image Y_1 or the visible image Y_2, Y^c is the shared part of Y_1 and Y_2, Y_k^u is the unique part, and α^c and α_k^u are respectively the shared and unique sparse coefficients of image Y_k under the dictionary D;
Writing formula (3) in matrix form gives formula (4):
[Y_1; Y_2] = [D, D, 0; D, 0, D] · [α^c; α_1^u; α_2^u]   (4)
Letting Y = [Y_1; Y_2], D̃ = [D, D, 0; D, 0, D] and α = [α^c; α_1^u; α_2^u], formula (4) simplifies to Y = D̃·α. The sparse representation coefficient α of the signal Y under the dictionary D̃ is found from the sparse constraint model of formula (5):
min_α ||α||_0  s.t.  ||Y − D̃·α||_2 ≤ ε   (5)
Formula (5) is solved by the OMP algorithm. According to the principle Y = D̃·α, the unique information of the infrared image Y_1 and the visible image Y_2 is decomposed to obtain the shared coefficient α^c and the unique coefficients α_1^u and α_2^u. The joint sparse decomposition proceeds as follows: the image sequence is first converted into small blocks, and each block is then decomposed. Let the joint signal to be decomposed be [y_1, y_2]^T; the OMP algorithm is modified so that the decomposition distinguishes the following three cases:
S1: if the atom selected in this iteration belongs to the shared atom component, decompose normally according to the OMP algorithm;
S2: if the atom selected in this iteration belongs to the infrared atom component, decompose y_1 on d_i, i.e., solve for the coefficient α of y_1 on d_i, and at the same time solve for the coefficient β of y_2 on d_i;
S3: if the atom selected in this iteration belongs to the visible atom component, where conventional OMP would decompose only y_2 on d_i, y_1 is likewise decomposed once on d_i;
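The common thread of cases S1-S3 is that whichever block the selected base atom d_i comes from, both residuals are decomposed on d_i, so the two images never end up on unrelated atoms. A greatly simplified toy sketch of that idea (it works directly on the base dictionary and omits the shared/unique bookkeeping of the full modified OMP; all names are our assumptions):

```python
import numpy as np

def sync_omp(y1, y2, D, n_iter=3, tol=1e-10):
    """Toy synchronous OMP: at each step pick the base atom most correlated
    with the joint residual, then project BOTH y1 and y2 on that atom
    (the S2/S3 modification), keeping the coefficients aligned atom-by-atom."""
    K = D.shape[1]
    r1, r2 = y1.astype(float).copy(), y2.astype(float).copy()
    a1, a2 = np.zeros(K), np.zeros(K)
    for _ in range(n_iter):
        corr = np.abs(D.T @ r1) + np.abs(D.T @ r2)
        i = int(np.argmax(corr))
        d = D[:, i]
        g = d @ d
        # least-squares coefficients of both residuals on the SAME atom d_i
        c1, c2 = (d @ r1) / g, (d @ r2) / g
        a1[i] += c1
        a2[i] += c2
        r1 -= c1 * d
        r2 -= c2 * d
        if np.linalg.norm(r1) + np.linalg.norm(r2) < tol:
            break
    return a1, a2

D = np.eye(2)  # trivial orthonormal dictionary for the demo
a1, a2 = sync_omp(np.array([1., 0.]), np.array([0., 2.]), D)
```

Because both signals receive a coefficient on every selected atom, the later choose-max fusion of step 5 can compare coefficients position by position.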
Step 5: sparse coefficient fusion
Fuse the t-th sparse coefficient according to formula (1) to obtain the fused sparse coefficient α_f(t):
α_f^i(t) = α_{j_i}^{u,i}(t),  j_i = argmax_{k ∈ {1,2}} |α_k^{u,i}(t)|   (1)
where α_1^{u,i}(t) denotes the i-th element of the unique sparse coefficient of the t-th block of the infrared image, j_i indexes the source (infrared or visible) whose sparse coefficient element has the larger modulus at the same position, and α_f^u(t) is the unique fusion coefficient of the t-th block obtained by taking the element of larger modulus at each position;
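The choose-max-modulus rule of formula (1) can be sketched element-wise (function and variable names are ours):

```python
import numpy as np

def fuse_unique_coefficients(a1_u, a2_u):
    """Formula (1) sketch: for each element i, keep the unique coefficient
    whose modulus (the 'activity level') is larger between the two sources."""
    pick_infrared = np.abs(a1_u) >= np.abs(a2_u)   # plays the role of j_i
    return np.where(pick_infrared, a1_u, a2_u)

a1_u = np.array([0.5, -2.0, 0.0])   # unique coefficients, infrared block t
a2_u = np.array([-1.0, 1.0, 0.3])   # unique coefficients, visible block t
af = fuse_unique_coefficients(a1_u, a2_u)
```

Note that the sign of the winning coefficient is kept; only its modulus decides the winner.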
Step 6: mean value fusion
Fuse the t-th mean according to formula (2) to obtain the fused image mean m(t):
m(t) = τ·m_1(t) + (1 − τ)·m_2(t)
τ = 1 / (1 + exp{−β(||Y_1(t)||_2 − ||Y_2(t)||_2)})   (2)
where Y_1(t) and Y_2(t) are the t-th zero-mean image data vectors and τ is the weight;
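Formula (2), with β = 0.01 as stated later in the description, can be sketched as (names are ours):

```python
import numpy as np

def fuse_means(m1_t, m2_t, y1_t, y2_t, beta=0.01):
    """Formula (2) sketch: tau is a sigmoid of the difference between the
    l2 norms of the two zero-mean blocks; beta = 0.01 per the patent."""
    tau = 1.0 / (1.0 + np.exp(-beta * (np.linalg.norm(y1_t)
                                       - np.linalg.norm(y2_t))))
    return tau * m1_t + (1.0 - tau) * m2_t

# equal-energy blocks give tau = 0.5, i.e. a plain average of the means
m = fuse_means(100.0, 60.0, np.ones(4), np.ones(4))
print(m)  # 80.0
```

The block with the larger energy (norm) thus contributes more of its mean, and the small β keeps the weighting gentle.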
Step 7: image reconstruction
First reconstruct from the fused sparse coefficient α_f(t) of step 5 and the image mean m(t) of step 6 to obtain the fused data X(t) of the t-th block; the final fused image I_F is then obtained from X(t) by accumulation and averaging, the fusion formula being formula (6):
X(t) = D·α_f(t) + m(t)·E
E = [1, 1, …, 1]^T,  E ∈ R^n   (6)
where E is a column vector whose elements are all 1, with as many elements as the window size n of step 1.
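Formula (6) applied to all P blocks at once can be sketched as follows; adding m(t) to every one of the n entries of column t is exactly m(t)·E, which NumPy broadcasting does for us. The final overlap-accumulate-and-average step that assembles I_F from the X(t) is omitted, and all names are ours:

```python
import numpy as np

def reconstruct_blocks(D, A_f, m):
    """Formula (6): X(t) = D*alpha_f(t) + m(t)*E for every block t.
    A_f holds the fused coefficients as columns, m holds the fused means;
    broadcasting the row of means over each column implements m(t)*E."""
    return D @ A_f + m[np.newaxis, :]

D = np.eye(2)                       # stand-in dictionary
A_f = np.array([[1., 0.],
                [0., 2.]])          # fused sparse coefficients, P = 2 blocks
m = np.array([10., 20.])            # fused means m(t)
X = reconstruct_blocks(D, A_f, m)
```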
To quantitatively evaluate the performance of different fusion methods on infrared and visible image fusion, the commonly used human-visual-system-based index Q^{AB/F} and the Piella indices (Q_0, Q_W, Q_E) were used to test the average performance of the fusion methods, yielding Table 1. All four indices lie in [0, 1], and the closer to 1, the higher the fusion quality. Q^{AB/F} evaluates how well edge information is preserved, Q_0 reflects the overall similarity between the fused image and the source images and is a relatively comprehensive index, Q_W is a window-weighted fusion quality index, and Q_E is another index evaluating the edges of the fused image.
Table 1: average performance indices of the six fusion methods
As can be seen from Table 1, relative to the conventional transform-domain DWT method, the two sparse representation methods SR-OMP and SR-SOMP, and the two joint sparse representation methods JSR1 and JSR2, the proposed method JSR-SOMP better retains the background information of the visible image and the target information of the infrared image, and its evaluation indices are superior on several parameters, demonstrating the validity of the method.
Under the joint sparse representation model, the proposed method decomposes the unique information of the infrared and visible images by synchronous orthogonal matching pursuit, refining the fusion of each original image block to a fusion of the representation coefficients of all atoms participating in that block and improving the comprehensiveness and completeness of the fusion. Because the dictionary atoms are learned adaptively from samples, the representation strength of the image on the atomic structures is improved; the method takes the modulus of the atomic coefficients as the "activity level" guiding the fusion, and the choose-max rule on the modulus maximally preserves the atomic features of the unique information of both images, so the fusion coefficients retain all atoms of high activity and keep more unique image information. The resulting fused image has distinct features and comprehensive information: not only is the unique information of the infrared and visible images fused together, but the fused image as a whole is clearer and free of local block distortion. Compared with the commonly used multiscale analysis, sparse representation, and joint sparse representation methods, the proposed algorithm achieves better fusion in both subjective vision and objective evaluation indices.
The basic principles, main features, and advantages of the invention have been shown and described above. Those skilled in the art should understand that the invention is not limited to the above embodiment; the embodiment and the description only illustrate the principles of the invention, and various changes and improvements may be made without departing from the spirit and scope of the invention. All such changes and improvements fall within the claimed scope of the invention, which is defined by the appended claims and their equivalents.

Claims (10)

1. An infrared and visible image fusion method based on synchronous decomposition of unique information, characterized by comprising the following steps:
Step 1: image preprocessing
label the infrared image participating in the fusion as Y_1 and the visible image participating in the fusion as Y_2, then divide Y_1 and Y_2 into P image blocks each by a sliding window, obtaining the data matrices X_1 and X_2 of the infrared and visible images;
Step 2: image mean value removing
remove the mean of each column vector of X_1 and X_2 to obtain the zero-mean image data Y_1 and Y_2, then construct the new joint data Y;
Step 3: dictionary learning and building
randomly select samples from the zero-mean image data Y_1 and Y_2, learn the redundant dictionary D by the KSVD algorithm, and construct the new dictionary D̃ under the joint sparse representation;
Step 4: the joint sparse of image is decomposed
according to the principle Y = D̃·α, decompose the unique information of the infrared image Y_1 and the visible image Y_2 to obtain the shared coefficient α^c and the unique coefficients α_1^u and α_2^u of the infrared and visible images;
Step 5: sparse coefficient fusion
fuse the t-th sparse coefficient according to formula (1) to obtain the fused sparse coefficient α_f(t):
α_f^i(t) = α_{j_i}^{u,i}(t),  j_i = argmax_{k ∈ {1,2}} |α_k^{u,i}(t)|   (1)
where α_1^{u,i}(t) denotes the i-th element of the unique sparse coefficient of the t-th block of the infrared image, j_i indexes the source (infrared or visible) whose sparse coefficient element has the larger modulus at the same position, and α_f^u(t) is the unique fusion coefficient of the t-th block obtained by taking the element of larger modulus at each position;
Step 6: mean value fusion
fuse the t-th mean according to formula (2) to obtain the fused image mean m(t):
m(t) = τ·m_1(t) + (1 − τ)·m_2(t)
τ = 1 / (1 + exp{−β(||Y_1(t)||_2 − ||Y_2(t)||_2)})   (2)
where Y_1(t) and Y_2(t) are the t-th zero-mean image data vectors and τ is the weight;
Step 7: image reconstruction
reconstruct from the sparse coefficient α_f(t) and the image mean m(t), then accumulate and average the reconstructed data to obtain the final fused image I_F.
2. The infrared and visible image fusion method based on synchronous decomposition of unique information according to claim 1, characterized in that in step 1 the infrared image participating in the fusion is labeled Y_1 and the visible image Y_2; Y_1 and Y_2 are each divided into P image blocks by a sliding window of size √n × √n, where n is the dimension of the dictionary atoms; each image block is straightened into a column vector and the vectors are arranged in order, obtaining the data matrices X_1 and X_2 of the infrared and visible images, each of size n × P.
3. The infrared and visible image fusion method based on synchronous decomposition of unique information according to claim 1, characterized in that in step 2 the means m_1(t) and m_2(t) of the column vectors of X_1 and X_2 are first removed, where t ∈ [1, P] indexes the t-th column vector, yielding the zero-mean image data Y_1 and Y_2, from which the new joint data Y is constructed.
4. The infrared and visible image fusion method based on synchronous decomposition of unique information according to claim 1, characterized in that step 3 proceeds as follows: from the zero-mean image data Y_1 and Y_2 obtained in step 2, randomly select L samples to form a training set, learn the redundant dictionary D by the KSVD algorithm, and then construct the new dictionary D̃ under the joint sparse representation.
5. The infrared and visible image fusion method based on synchronous decomposition of unique information according to claim 1, characterized in that the specific calculation of the Y = D̃·α principle of step 4 is as follows: assume the infrared image Y_1 and the visible image Y_2 share the redundant dictionary D; each image is then composed of a shared sparse component and a unique sparse component, as in formula (3):
Y_k = Y^c + Y_k^u = D·α^c + D·α_k^u   (3)
where k = 1, 2, so Y_k is the infrared image Y_1 or the visible image Y_2, Y^c is the shared part of Y_1 and Y_2, Y_k^u is the unique part, and α^c and α_k^u are respectively the shared and unique sparse coefficients of image Y_k under the dictionary D;
writing formula (3) in matrix form gives formula (4):
[Y_1; Y_2] = [D, D, 0; D, 0, D] · [α^c; α_1^u; α_2^u]   (4)
letting Y = [Y_1; Y_2], D̃ = [D, D, 0; D, 0, D] and α = [α^c; α_1^u; α_2^u], formula (4) simplifies to Y = D̃·α.
6. The infrared and visible image fusion method based on synchronous decomposition of unique information according to claim 1, characterized in that, for the problem of the sparse representation coefficient α of the signal Y under the dictionary D̃ in step 4, the sparse constraint model is formula (5):
min_α ||α||_0  s.t.  ||Y − D̃·α||_2 ≤ ε   (5)
where formula (5) can be solved by any of the BP, MP, or OMP algorithms.
7. The infrared and visible image fusion method based on synchronous decomposition of unique information according to claim 6, characterized in that formula (5) is solved using the OMP algorithm.
8. The infrared and visible image fusion method based on synchronous decomposition of unique information according to claim 1, characterized in that the joint sparse decomposition of the images in step 4 proceeds as follows: the image sequence is first converted into small blocks, and each block is then decomposed; let the joint signal to be decomposed be [y_1, y_2]^T, then the OMP algorithm is modified so that the decomposition distinguishes the following three cases:
S1: if the atom selected in this iteration belongs to the shared atom component, decompose normally according to the OMP algorithm;
S2: if the atom selected in this iteration belongs to the infrared atom component, decompose y_1 on d_i, i.e., solve for the coefficient α of y_1 on d_i, and at the same time solve for the coefficient β of y_2 on d_i;
S3: if the atom selected in this iteration belongs to the visible atom component, where conventional OMP would decompose only y_2 on d_i, y_1 is likewise decomposed once on d_i.
9. The infrared and visible image fusion method based on synchronous decomposition of unique information according to claim 1, characterized in that β in formula (2) of step 6 is a coefficient with value 0.01.
10. The infrared and visible image fusion method based on synchronous decomposition of unique information according to claim 1, characterized in that in step 7 the fused sparse coefficient α_f(t) of step 5 and the image mean m(t) of step 6 are first reconstructed to obtain the fused data X(t) of the t-th block, and the final fused image I_F is then obtained from X(t) by accumulation and averaging, the fusion formula being formula (6):
X(t) = D·α_f(t) + m(t)·E
E = [1, 1, …, 1]^T,  E ∈ R^n   (6)
where E is a column vector whose elements are all 1, with as many elements as the window size n of step 1.
CN201811351004.9A 2018-11-14 2018-11-14 Infrared and visible image fusion method based on synchronous decomposition of unique information Active CN109447933B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811351004.9A CN109447933B (en) 2018-11-14 2018-11-14 Infrared and visible image fusion method based on synchronous decomposition of unique information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811351004.9A CN109447933B (en) 2018-11-14 2018-11-14 Infrared and visible image fusion method based on synchronous decomposition of unique information

Publications (2)

Publication Number Publication Date
CN109447933A CN109447933A (en) 2019-03-08
CN109447933B true CN109447933B (en) 2019-10-22

Family

ID=65552259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811351004.9A Active CN109447933B (en) 2018-11-14 2018-11-14 Infrared and visible image fusion method based on synchronous decomposition of unique information

Country Status (1)

Country Link
CN (1) CN109447933B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047059B (en) * 2019-04-10 2021-07-09 北京旷视科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN111626939A (en) * 2020-06-06 2020-09-04 徐州飞梦电子科技有限公司 Noisy infrared and visible light image fusion method based on sparse representation and Piella index optimization
CN117218048B (en) * 2023-11-07 2024-03-08 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104021537A (en) * 2014-06-23 2014-09-03 西北工业大学 Infrared and visible image fusion method based on sparse representation
CN106981058A (en) * 2017-03-29 2017-07-25 武汉大学 Optical and infrared image fusion method and system based on sparse dictionary
CN107730482A (en) * 2017-09-28 2018-02-23 电子科技大学 Sparse fusion algorithm based on region energy and variance
CN108122219A (en) * 2017-11-30 2018-06-05 西北工业大学 Infrared and visible light image fusion method based on joint sparse and non-negative sparse

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN106683066A (en) * 2017-01-13 2017-05-17 西华大学 Image fusion method based on joint sparse model
CN107341786B (en) * 2017-06-20 2019-09-24 西北工业大学 Infrared and visible light image fusion method based on wavelet transform and joint sparse representation
CN107657217B (en) * 2017-09-12 2021-04-30 电子科技大学 Infrared and visible light video fusion method based on moving target detection


Non-Patent Citations (2)

Title
"Multi-focus image fusion with adaptive dictionary learning"; 严春满; Journal of Image and Graphics (中国图象图形学报); 2012-09-16; Vol. 17, No. 9; pp. 1144-1149 *
"Infrared and visible image fusion algorithm and simulation research for space target recognition"; 陈永胜; Master's thesis, Shanghai Jiao Tong University (上海交通大学); 2012-02-01; pp. 9-83 *


Similar Documents

Publication Publication Date Title
Chai et al. Multifocus image fusion scheme using focused region detection and multiresolution
CN109447933B (en) The infrared and visible light image fusion method decomposed based on peculiar synchronizing information
Moreton et al. Investigation into the use of photoanthropometry in facial image comparison
CN106339998A (en) Multi-focus image fusion method based on contrast pyramid transformation
CN109859233A (en) Training method and system for image processing and image processing models
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN109447909A (en) Infrared and visible light image fusion method and system based on visual saliency
CN113420826A (en) Liver lesion image processing system and image processing method
CN106683066A (en) Image fusion method based on joint sparse model
Florkow et al. The impact of MRI-CT registration errors on deep learning-based synthetic CT generation
CN106327479A (en) Apparatus and method for identifying blood vessels in angiography-assisted congenital heart disease operation
Wu et al. Auto-contouring via automatic anatomy recognition of organs at risk in head and neck cancer on CT images
CN106204601A (en) In-vivo parallel registration method for hyperspectral sequence images based on band-scanning imaging
CN106504221B (en) Medical image fusion method based on quaternion wavelet transform context mechanism
Li et al. Reference-based multi-level features fusion deblurring network for optical remote sensing images
Cao et al. Use of deep learning in forensic sex estimation of virtual pelvic models from the Han population
Danesh et al. Synthetic OCT data in challenging conditions: three-dimensional OCT and presence of abnormalities
CN113269774B (en) Parkinson disease classification and lesion region labeling method of MRI (magnetic resonance imaging) image
El-Shafai et al. Improving traditional method used for medical image fusion by deep learning approach-based convolution neural network
Lange et al. Computer-aided-diagnosis (CAD) for colposcopy
Tuan et al. The improved faster r-cnn for detecting small facial landmarks on vietnamese human face based on clinical diagnosis
CN116612056A (en) Image data fusion algorithm based on attention mechanism and Boosting model integrated training strategy
CN116152235A (en) Cross-modal synthesis method for medical image from CT (computed tomography) to PET (positron emission tomography) of lung cancer
Hsia et al. A 3D endoscopic imaging system with content-adaptive filtering and hierarchical similarity analysis
Liu et al. An efficient residual-based method for railway image dehazing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant