CN107689038A - Image fusion method based on sparse representation and rolling guidance filtering - Google Patents

Image fusion method based on sparse representation and rolling guidance filtering

Info

Publication number
CN107689038A
Authority
CN
China
Prior art keywords
image
fusion
smoothed
images
width
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710724551.6A
Other languages
Chinese (zh)
Inventor
张萍
王璟璟
袁雨辰
田明
费春
王晓玮
夏清
吴江
刘婧雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201710724551.6A priority Critical patent/CN107689038A/en
Publication of CN107689038A publication Critical patent/CN107689038A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20021 - Dividing image into blocks, subimages or windows
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20024 - Filtering details
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An image fusion method based on sparse representation and rolling guidance filtering, belonging to the field of image processing. The present invention decomposes the source images into smoothed images and detail images, then fuses the smoothed images with sparse representation and the detail images with rolling guidance filtering, and finally adds the fused smoothed image and the fused detail image to obtain the fused image. Sparse representation fuses the low-rank smoothed data well, while rolling guidance filtering preserves the edges and contours of the detail data and highlights the valid information of the detail images, so that compared with traditional fusion methods the present invention achieves a clearly better fusion effect and higher image evaluation metrics.

Description

An image fusion method based on sparse representation and rolling guidance filtering
Technical field
The invention belongs to the field of image processing and relates generally to image fusion technology; it specifically discloses an image fusion method based on sparse representation (Sparse Representation, SR) and the rolling guidance filter (Rolling Guidance Filter, RGF).
Background art
Image fusion is the process of synthesizing two or more images captured of the same target into a single image; the result combines the advantages of the source images, carries more accurate information, and is better suited to human visual perception or to computer processing. The images to be fused each possess different features and highlight different information, so fusing several images with an appropriate algorithm yields a fused image whose target information is more accurate and complete, which facilitates the analysis and study of the image data.
With the continuous development of science and technology, the variety and the application fields of imaging sensors keep expanding. Different sensors have different imaging principles and therefore produce images with different characteristics, so the acquired image data is both redundant and complementary. Redundant information can be exploited to obtain more reliable results and to improve the signal-to-noise ratio, while complementary information allows the fusion result to contain richer and more complete detail. Image fusion technology therefore merges multiple source images of the same scene or target acquired by different sensors according to actual needs, making the result suitable for visual perception and computer processing.
Image fusion is performed at three processing levels: pixel-level fusion, feature-level fusion and decision-level fusion. Pixel-level fusion directly merges the raw information of the original images and preserves fine image detail well; its purpose is to fuse multiple input images into one image that is more useful to human or machine perception than any single input. These advantages give it important application value in remote sensing, medical imaging and night vision. An image fusion method typically comprises three main stages: image transformation, fusion of the transform coefficients, and inverse transformation. According to the transform used, image fusion methods can be divided into four classes: (1) methods based on multi-resolution decomposition; (2) methods based on sparse representation; (3) methods that fuse image pixels or another transform domain directly, such as the principal component space or the intensity-hue-saturation color space; (4) methods that combine several transforms. Among existing fusion methods, linear weighted averaging requires little computation and is fast, but cannot preserve the information of the source images well; the discrete wavelet transform (Discrete Wavelet Transform, DWT) converts the data into the frequency domain and is easy to compute, but lacks directionality; the contourlet transform can capture the intrinsic geometric structure of an image and handles two-dimensional signals well, but it is not shift-invariant and cannot represent complex spatial structure. In recent years, edge-preserving filters have been widely used in image processing; typical edge-preserving filters include the bilateral filter and the guided filter, which can accurately separate the small-scale texture details, the medium-scale edges and the large-scale spatial structure of an image, and therefore perform well in image smoothing and denoising.
Summary of the invention
The present invention proposes an image fusion method based on sparse representation (Sparse Representation, SR) and the rolling guidance filter (Rolling Guidance Filter, RGF). The method smooths the background while maintaining the edges and contours of the image, and thus preserves the detail information of the image well.
The technical solution of the present invention is as follows:
An image fusion method based on sparse representation and rolling guidance filtering comprises the following steps:
Step 1: decompose the source images into smoothed images and detail images using a mean filter;
Step 2: fuse the smoothed images obtained in step 1 with a method based on sparse representation:
the smoothed images obtained in step 1 are first decomposed into a series of image blocks; each image block is converted into a column vector and the mean of the column vector is subtracted so that the vector has zero mean; the sparse coefficient of the vector is solved with the OMP (Orthogonal Matching Pursuit) algorithm; the sparse coefficients are then fused according to the "max-L1" rule; the fused sparse coefficients are inverse-transformed into fused image blocks, and the fused smoothed image is obtained by assembling the blocks according to the positions of the different image blocks;
Step 3: fuse the detail images obtained in step 1 with a method based on rolling guidance filtering:
images to be processed are constructed from the source images with the DoG operator and filtered with the rolling guidance filter; the results are then normalized to obtain weight coefficients, and the obtained weight coefficients are weighted-summed with the corresponding detail images to obtain the fused detail image;
Step 4: the fused smoothed image of step 2 and the fused detail image of step 3 are added to obtain the fused image.
An image fusion method based on sparse representation and rolling guidance filtering specifically comprises the following steps:
Step 1: decomposition of the source images
The source images are decomposed into smoothed images and detail images using a mean filter, specifically:
Pn=In*Lave
Dn=In-Pn
where In is the n-th source image, Lave is the mean filter, Pn is the smoothed image corresponding to the n-th source image, Dn is the detail image corresponding to the n-th source image, and n=1,2,...,N;
Step 2: fusion of the smoothed images
2.1 Using a sliding-window technique, Pn is decomposed into image blocks of size √l × √l in order from the upper-left corner to the lower-right corner; suppose Pn is decomposed into T image blocks, where l is the width of the learning dictionary D;
2.2 Each image block is converted into a column vector V; the mean v̄ of the elements of V is then subtracted from every element (equivalently, an all-ones column vector of length l multiplied by v̄ is subtracted from V), so that the resulting vector V̂ has zero mean;
2.3 The sparse coefficient of the zero-mean vector V̂ obtained in step 2.2 is computed with the OMP algorithm, the objective being to approximate V̂ over the learning dictionary D within the error tolerance ε;
2.4 The procedure of steps 2.1-2.3 is repeated to obtain the sparse coefficients of the corresponding image blocks in the N smoothed images; the sparse coefficients of the N smoothed images are then fused according to the "max-L1" rule, giving the fused sparse coefficient of the i-th image block of the N smoothed images;
2.5 The fused column vector of the i-th image block is obtained by inverse-transforming its fused sparse coefficient and adding back the mean of the corresponding column vector (again in the form of an all-ones column vector of length l multiplied by that mean);
2.6 For the T image blocks, T fused column vectors are obtained; each fused column vector is reshaped into an image block, and the T blocks are reassembled according to the positions of the different image blocks, giving the fused smoothed image PF;
Step 3: fusion of the detail images
3.1 For each source image, an image to be processed Sn is constructed with the DoG operator:
Sn=abs(In*Gσ1-In*Gσ2)
where G is a Gaussian filter, σ1 and σ2 are the standard deviations of two different Gaussian kernels, Sn is the n-th image to be processed, and n=1,2,...,N;
3.2 The n-th image to be processed Sn is compared with the others to obtain the characteristic image Pn: the values of the pixels at the same position k in the N images to be processed are compared with one another, where Sn(k) denotes the value of pixel k in the n-th image to be processed, Pn(k) denotes the value of pixel k in the n-th characteristic image, k=1,2,...,K, K is the total number of pixels in Sn and Pn, Pn is the characteristic image corresponding to the n-th image to be processed, n=1,2,...,N, and N is the number of source images;
3.3 The characteristic image Pn obtained in step 3.2 is filtered with the rolling guidance filter and normalized:
Rn=RGF(Pn)
where RGF denotes the rolling guidance filtering operation and the normalized result R̃n is the finally obtained weight image;
3.4 For the N source images, the procedure of steps 3.1-3.3 is repeated to obtain N weight images; the obtained weight images are then fused with the corresponding detail images by weighted summation, giving the fused detail image;
Step 4: the fused smoothed image PF obtained in step 2 and the fused detail image DF obtained in step 3 are added to obtain the fused image:
F=PF+DF
The beneficial effects of the present invention are as follows:
The present invention decomposes the source images into smoothed images and detail images, then fuses the smoothed images with sparse representation and the detail images with rolling guidance filtering, and finally adds the fused smoothed image and the fused detail image to obtain the fused image. Sparse representation fuses the low-rank smoothed data well, while rolling guidance filtering preserves the edges and contours of the detail data and highlights the valid information of the detail images, so that compared with traditional fusion methods the present invention achieves a clearly better fusion effect and higher image evaluation metrics.
Brief description of the drawings
Fig. 1 is a flow diagram of the image fusion method based on sparse representation and rolling guidance filtering according to the present invention;
Fig. 2 is a schematic diagram of the decomposition of the source images in the present invention;
Fig. 3 is a schematic diagram of the fusion of the smoothed images in the present invention;
Fig. 4 is a schematic diagram of the fusion of the detail images in the present invention;
Fig. 5 is a schematic diagram of multi-focus image fusion, where (a) is the far-focused image, (b) is the near-focused image and (c) is the fused image;
Fig. 6 is a schematic diagram of medical image fusion, where (a) is the CT image, (b) is the MR image and (c) is the fused image;
Fig. 7 is a schematic diagram of visible-infrared image fusion, where (a) is the visible image, (b) is the infrared image and (c) is the fused image;
Detailed description of the embodiments
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and embodiments.
An image fusion method based on sparse representation and rolling guidance filtering specifically comprises the following steps:
Step 1: decomposition of the source images
The N source images to be fused are images of the same target captured at the same time by sensors with different imaging characteristics. Each of the N source images is filtered with a mean filter; the filtering result is the smoothed image, and the difference obtained by subtracting the smoothed image from the source image is the detail image. Specifically:
Pn=In*Lave
Dn=In-Pn
where In is the n-th source image, Lave is the mean filter, Pn is the smoothed image corresponding to the n-th source image, Dn is the detail image corresponding to the n-th source image, and n=1,2,...,N.
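For illustration, a minimal Python (NumPy/SciPy) sketch of this decomposition is given below; grayscale source images and a 31 x 31 mean-filter window are assumptions, since the description above does not fix the filter size.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def decompose(source, size=31):
    """Step 1: split a source image I_n into a smoothed part P_n = I_n * L_ave
    and a detail part D_n = I_n - P_n.
    `size` is the mean-filter window; the value 31 is an assumption."""
    smoothed = uniform_filter(source.astype(np.float64), size=size)
    detail = source.astype(np.float64) - smoothed
    return smoothed, detail
```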
Step 2: fusion of the smoothed images
2.1 Using a sliding-window technique, Pn is decomposed into image blocks of size √l × √l in order from the upper-left corner to the lower-right corner; suppose Pn is decomposed into T image blocks, where l is the width of the learning dictionary D; the step size of the sliding window is set to s pixels, with s smaller than √l;
2.2 Each image block is converted into a column vector V; the mean v̄ of the elements of V is then subtracted from every element (equivalently, an all-ones column vector of length l multiplied by v̄ is subtracted from V), so that the resulting vector V̂ has zero mean;
2.3 The sparse coefficient of the zero-mean vector V̂ obtained in step 2.2 is computed with the OMP (Orthogonal Matching Pursuit) algorithm, the objective being to approximate V̂ over the learning dictionary D within the error tolerance ε;
2.4 The procedure of steps 2.1-2.3 is repeated to obtain the sparse coefficients of the corresponding image blocks in the N smoothed images; the sparse coefficients of the N smoothed images are then fused according to the "max-L1" rule, giving the fused sparse coefficient of the i-th image block of the N smoothed images;
2.5 The fused column vector of the i-th image block is obtained by inverse-transforming its fused sparse coefficient and adding back the mean of the corresponding column vector (again in the form of an all-ones column vector of length l multiplied by that mean);
2.6 For all the image blocks, T fused column vectors are obtained; each fused column vector is reshaped into an image block, and the T blocks are reassembled according to the positions of the different image blocks, giving the fused smoothed image PF; where different image blocks overlap, the accumulated pixel values in the overlapping region are divided by the number of overlapping blocks, i.e. averaged.
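A minimal Python sketch of steps 2.1-2.6 follows. The learning dictionary D (with l rows, l a perfect square), the sliding step s and the tolerance ε are left open by the description above and are passed in as parameters; the greedy OMP loop and the max-L1 selection follow the standard formulations the text refers to, so this is a sketch under those assumptions rather than a verbatim reproduction of the patented formulas.

```python
import numpy as np

def omp(D, y, eps):
    """Greedy orthogonal matching pursuit (step 2.3): find a sparse alpha
    such that ||y - D @ alpha||_2 <= eps, eps being the error tolerance."""
    residual = y.astype(np.float64).copy()
    support, alpha = [], np.zeros(D.shape[1])
    while np.linalg.norm(residual) > eps and len(support) < D.shape[1]:
        corr = np.abs(D.T @ residual)
        corr[support] = -1.0                               # do not reselect an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    if support:
        alpha[support] = coef
    return alpha

def fuse_smoothed(smoothed_imgs, D, step=4, eps=0.1):
    """Steps 2.1-2.6: slide a sqrt(l) x sqrt(l) window over the N smoothed
    images, fuse the zero-mean blocks by the max-L1 rule applied to their
    OMP coefficients, and reassemble the fused blocks, averaging overlaps.
    `step` and `eps` are illustrative choices."""
    l = D.shape[0]
    patch = int(round(np.sqrt(l)))                         # assumes l is a perfect square
    h, w = smoothed_imgs[0].shape
    fused, count = np.zeros((h, w)), np.zeros((h, w))
    for r in range(0, h - patch + 1, step):
        for c in range(0, w - patch + 1, step):
            best_block, best_l1 = None, -1.0
            for img in smoothed_imgs:
                v = img[r:r + patch, c:c + patch].reshape(-1)
                v_mean = v.mean()
                a = omp(D, v - v_mean, eps)                # sparse coefficient (2.3)
                if np.abs(a).sum() > best_l1:              # max-L1 rule (2.4)
                    best_l1 = np.abs(a).sum()
                    best_block = D @ a + v_mean            # inverse transform (2.5)
            fused[r:r + patch, c:c + patch] += best_block.reshape(patch, patch)
            count[r:r + patch, c:c + patch] += 1
    return fused / np.maximum(count, 1)                    # average overlapping pixels (2.6)
```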
Step 3: fusion of the detail images
3.1 For each source image, an image to be processed Sn is constructed with the DoG (difference of Gaussians) operator:
Sn=abs(In*Gσ1-In*Gσ2)
where G is a Gaussian filter, σ1 and σ2 are the standard deviations of two different Gaussian kernels (the reference values used herein are σ1=1 and σ2=0.3), Sn is the n-th image to be processed, and n=1,2,...,N;
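A short sketch of this DoG construction in Python with SciPy; the reference values σ1=1 and σ2=0.3 from the paragraph above are used as defaults.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_map(source, sigma1=1.0, sigma2=0.3):
    """Step 3.1: S_n = abs(I_n * G_sigma1 - I_n * G_sigma2), the absolute
    difference of two Gaussian-smoothed copies of the source image."""
    src = source.astype(np.float64)
    return np.abs(gaussian_filter(src, sigma1) - gaussian_filter(src, sigma2))
```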
3.2 The n-th image to be processed Sn is compared with the others to obtain the characteristic image Pn, specifically:
the values of the pixels at the same position k in the N images to be processed are compared with one another, and the characteristic image Pn corresponding to the n-th image to be processed is obtained from this comparison; here Sn(k) denotes the value of pixel k in the n-th image to be processed, S1(k) denotes the value of pixel k in the 1st image to be processed, Pn(k) denotes the value of pixel k in the n-th characteristic image, k=1,2,...,K, K is the total number of pixels in Sn and Pn, Pn is the characteristic image corresponding to the n-th image to be processed, n=1,2,...,N, and N is the number of source images;
3.3 The rolling guidance filter removes fine structures while preserving edges; it is realized by iteratively applying a joint bilateral filter to the target. In the present invention the rolling guidance filtering is carried out by iterating the filter: the value Ht+1(k) of pixel k in the image of iteration t+1 is obtained by filtering the characteristic image Pn over the neighborhood M(k) of pixel k, with weights determined jointly by the spatial distance between k and the neighborhood points m and by the difference between Ht(k) and Ht(m) in the image Ht of iteration t. Here Pn is the characteristic image corresponding to the n-th image to be processed obtained in step 3.2, Pn(m) is the value of pixel m in that characteristic image, Ht is the image of the t-th iteration, Ht(k) and Ht(m) are the values of pixels k and m in the image of the t-th iteration, Ht+1(k) is the value of pixel k in the image of the (t+1)-th iteration, M(k) is the neighborhood of pixel k, m is a point in the neighborhood of pixel k, and σs and σr are parameters to be set that represent the spatial weight and the range weight, with reference values of 3 and 0.05 respectively; the initial iteration image H0 is set to an all-zero matrix of the same size as the source image.
The values of all pixels k=1,2,...,K in the image of the (t+1)-th iteration are computed in this way, giving the image after iteration t+1; the weight image obtained after the rolling guidance filtering operation is Rn.
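The iteration just described corresponds to the published rolling guidance filter, i.e. an iterated joint bilateral filter. The following is a brute-force, per-pixel Python sketch for illustration only; the neighborhood radius and the number of iterations are not specified above and are chosen here as plausible defaults.

```python
import numpy as np

def rolling_guidance_filter(P, sigma_s=3.0, sigma_r=0.05, iters=4, radius=None):
    """Step 3.3: iterative joint bilateral filtering of the characteristic
    image P.  H_0 is all zeros; at each iteration H_{t+1}(k) is a normalized
    sum over the neighborhood M(k) of
    exp(-||k-m||^2/(2*sigma_s^2) - (H_t(k)-H_t(m))^2/(2*sigma_r^2)) * P(m).
    `iters` and `radius` are illustrative choices, not values from the patent."""
    if radius is None:
        radius = int(3 * sigma_s)
    h, w = P.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))
    Pp = np.pad(P, radius, mode='reflect')
    H = np.zeros_like(P, dtype=np.float64)
    for _ in range(iters):
        Hp = np.pad(H, radius, mode='reflect')
        Hnew = np.empty_like(H)
        for i in range(h):
            for j in range(w):
                patchP = Pp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                patchH = Hp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                rng = np.exp(-(H[i, j] - patchH) ** 2 / (2 * sigma_r ** 2))
                wgt = spatial * rng
                Hnew[i, j] = (wgt * patchP).sum() / wgt.sum()
        H = Hnew
    return H
```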
The rolling guidance filtering of the characteristic image Pn, followed by normalization, can be expressed as:
Rn=RGF(Pn)
where RGF denotes the rolling guidance filtering operation and the normalized result R̃n is the finally obtained weight image;
3.4 For the N source images, the procedure of steps 3.1-3.3 is repeated to obtain N weight images; the obtained weight images are then multiplied with the corresponding detail images and the weighted results are summed, giving the fused detail image DF.
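Steps 3.2-3.4 can be sketched together as follows. The per-pixel comparison of step 3.2 is implemented here as a winner-take-all selection (the image with the largest DoG response at a pixel receives that pixel), and the normalization divides each filtered weight map by the per-pixel sum of all maps; both are reasonable readings of the text above and should be treated as assumptions.

```python
import numpy as np

def fuse_details(detail_imgs, dog_maps, rgf):
    """Steps 3.2-3.4: build characteristic images from the DoG maps
    (winner-take-all, an assumed reading of step 3.2), filter them with
    the rolling guidance filter `rgf`, normalize so the weights sum to 1
    at every pixel, and form the weighted sum of the detail images."""
    S = np.stack(dog_maps)                                   # N x H x W
    P = (S == S.max(axis=0, keepdims=True)).astype(np.float64)
    R = np.stack([rgf(p) for p in P])                        # filtered weight maps R_n
    R = R / (R.sum(axis=0, keepdims=True) + 1e-12)           # normalization -> weights
    return np.sum(R * np.stack(detail_imgs), axis=0)         # fused detail image D_F
```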
Step 4: the fused smoothed image PF obtained in step 2 and the fused detail image DF obtained in step 3 are added to obtain the fused image:
F=PF+DF
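An end-to-end usage example tying the pieces together; it relies on the helper functions sketched earlier (decompose, fuse_smoothed, dog_map, rolling_guidance_filter, fuse_details), on a pre-trained dictionary D and on registered grayscale source images, all of which are assumptions for illustration.

```python
def fuse(sources, D):
    """Full pipeline: decomposition (step 1), sparse-representation fusion
    of the smoothed parts (step 2), rolling-guidance-filter fusion of the
    detail parts (step 3), and the final addition F = P_F + D_F (step 4)."""
    smoothed, details = zip(*(decompose(I) for I in sources))
    PF = fuse_smoothed(list(smoothed), D)
    DF = fuse_details(list(details), [dog_map(I) for I in sources],
                      rolling_guidance_filter)
    return PF + DF
```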
Embodiment
Two source images are fused according to the method described above to obtain the fused image: the multi-focus image fusion shown in Fig. 5, the medical image fusion shown in Fig. 6, and the visible-infrared image fusion shown in Fig. 7.
The fusion results of Fig. 5, Fig. 6 and Fig. 7 were evaluated with objective metrics and compared with the discrete wavelet transform (DWT) and the curvelet transform (CVT), giving the results in Tables 1, 2 and 3 below. The selected evaluation metrics are:
(1) Qmi: an information-theory-based metric that computes the mutual information between the images;
(2) Qy: the degree to which the structural information of the original images is preserved;
(3) EN: the entropy, i.e. the average amount of information carried by the image;
(4) SSIM: structural similarity, which measures image similarity in terms of luminance, contrast and structure.
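For reference, the two standard measures EN and SSIM can be computed as sketched below (8-bit images and scikit-image are assumed); Qmi and Qy are fusion-specific metrics whose exact definitions are not reproduced here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def entropy(img):
    """EN: Shannon entropy of the 8-bit histogram of the fused image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mean_ssim(fused, sources):
    """SSIM averaged between the fused image and each source image
    (a common way to apply SSIM in fusion evaluation)."""
    return float(np.mean([structural_similarity(fused, s, data_range=255)
                          for s in sources]))
```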
Table 1: comparison of the evaluation metrics of the method of the invention, the discrete wavelet transform and the curvelet transform for the multi-focus image fusion of Fig. 5
Table 2: comparison of the evaluation metrics of the method of the invention, the discrete wavelet transform and the curvelet transform for the medical image fusion of Fig. 6
Table 3: comparison of the evaluation metrics of the method of the invention, the discrete wavelet transform and the curvelet transform for the visible-infrared image fusion of Fig. 7
As can be seen from Tables 1, 2 and 3, for different types of source images the fused images obtained by the proposed image fusion method based on sparse representation and rolling guidance filtering all show excellent performance. The metrics Qy and SSIM mainly measure how well the structural edges of the images are fused; in the fusion results for the three types of source images, the present invention is optimal. The metrics Qmi and EN mainly measure the amount of information carried by the image; in the fusion results for the three types of source images, the present invention is optimal in most cases and still achieves good results in the remaining few.

Claims (2)

1. An image fusion method based on sparse representation and rolling guidance filtering, comprising the following steps:
Step 1: decomposing the source images into smoothed images and detail images using a mean filter;
Step 2: fusing the smoothed images obtained in step 1 with a method based on sparse representation:
first decomposing the smoothed images obtained in step 1 into a series of image blocks, converting each image block into a column vector and subtracting the mean of the column vector so that the vector has zero mean, solving the sparse coefficient of the vector with the OMP algorithm, then fusing the sparse coefficients according to the "max-L1" rule, inverse-transforming the fused sparse coefficients into fused image blocks, and assembling the blocks according to the positions of the different image blocks to obtain the fused smoothed image;
Step 3: fusing the detail images obtained in step 1 with a method based on rolling guidance filtering:
constructing images to be processed from the source images with the DoG operator and applying rolling guidance filtering to the images to be processed, then normalizing the results to obtain weight coefficients, and weighted-summing the obtained weight coefficients with the corresponding detail images to obtain the fused detail image;
Step 4: adding the fused smoothed image of step 2 and the fused detail image of step 3 to obtain the fused image.
2. An image fusion method based on sparse representation and rolling guidance filtering, specifically comprising the following steps:
Step 1: decomposition of the source images
decomposing the source images into smoothed images and detail images using a mean filter, specifically:
Pn=In*Lave
Dn=In-Pn
where In is the n-th source image, Lave is the mean filter, Pn is the smoothed image corresponding to the n-th source image, Dn is the detail image corresponding to the n-th source image, and n=1,2,...,N;
Step 2: fusion of the smoothed images
2.1 Using a sliding-window technique, Pn is decomposed into image blocks of size √l × √l in order from the upper-left corner to the lower-right corner; suppose Pn is decomposed into T image blocks, where l is the width of the learning dictionary D;
2.2 Each image block is converted into a column vector V; the mean v̄ of the elements of V is then subtracted from every element (equivalently, an all-ones column vector of length l multiplied by v̄ is subtracted from V), so that the resulting vector V̂ has zero mean;
2.3 The sparse coefficient of the zero-mean vector V̂ obtained in step 2.2 is computed with the OMP algorithm, the objective being to approximate V̂ over the learning dictionary D within the error tolerance ε;
2.4 The procedure of steps 2.1-2.3 is repeated to obtain the sparse coefficients of the corresponding image blocks in the N smoothed images; the sparse coefficients of the N smoothed images are then fused according to the "max-L1" rule, giving the fused sparse coefficient of the i-th image block of the N smoothed images;
2.5 The fused column vector of the i-th image block is obtained by inverse-transforming its fused sparse coefficient and adding back the mean of the corresponding column vector (again in the form of an all-ones column vector of length l multiplied by that mean);
2.6 For the T image blocks, T fused column vectors are obtained; each fused column vector is reshaped into an image block, and the T blocks are reassembled according to the positions of the different image blocks, giving the fused smoothed image PF;
Step 3: fusion of the detail images
3.1 For each source image, an image to be processed Sn is constructed with the DoG operator:
Sn=abs(In*Gσ1-In*Gσ2)
where G is a Gaussian filter, σ1 and σ2 are the standard deviations of two different Gaussian kernels, Sn is the n-th image to be processed, and n=1,2,...,N;
3.2 The n-th image to be processed Sn is compared with the others to obtain the characteristic image Pn: the values of the pixels at the same position k in the N images to be processed are compared with one another, where Sn(k) denotes the value of pixel k in the n-th image to be processed, Pn(k) denotes the value of pixel k in the n-th characteristic image, k=1,2,...,K, K is the total number of pixels in Sn and Pn, Pn is the characteristic image corresponding to the n-th image to be processed, n=1,2,...,N, and N is the number of source images;
3.3 The characteristic image Pn obtained in step 3.2 is filtered with the rolling guidance filter and normalized:
Rn=RGF(Pn)
where RGF denotes the rolling guidance filtering operation and the normalized result R̃n is the finally obtained weight image;
3.4 For the N source images, the procedure of steps 3.1-3.3 is repeated to obtain N weight images; the obtained weight images are then fused with the corresponding detail images, giving the fused detail image;
Step 4: the fused smoothed image PF obtained in step 2 and the fused detail image DF obtained in step 3 are added to obtain the fused image:
F=PF+DF
CN201710724551.6A 2017-08-22 2017-08-22 A kind of image interfusion method based on rarefaction representation and circulation guiding filtering Pending CN107689038A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710724551.6A CN107689038A (en) 2017-08-22 2017-08-22 A kind of image interfusion method based on rarefaction representation and circulation guiding filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710724551.6A CN107689038A (en) 2017-08-22 2017-08-22 A kind of image interfusion method based on rarefaction representation and circulation guiding filtering

Publications (1)

Publication Number Publication Date
CN107689038A true CN107689038A (en) 2018-02-13

Family

ID=61153608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710724551.6A Pending CN107689038A (en) 2017-08-22 2017-08-22 A kind of image interfusion method based on rarefaction representation and circulation guiding filtering

Country Status (1)

Country Link
CN (1) CN107689038A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9152881B2 (en) * 2012-09-13 2015-10-06 Los Alamos National Security, Llc Image fusion using sparse overcomplete feature dictionaries
CN104077761A (en) * 2014-06-26 2014-10-01 桂林电子科技大学 Multi-focus image fusion method based on adaptive sparse representation
CN106447640A (en) * 2016-08-26 2017-02-22 西安电子科技大学 Multi-focus image fusion method based on dictionary learning and rotating guided filtering and multi-focus image fusion device thereof
CN106886986A (en) * 2016-08-31 2017-06-23 电子科技大学 Image fusion method based on adaptive group-structured sparse dictionary learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QI ZHANG ET AL: ""Rolling Guidance Filter"", 《EUROPEAN CONFERENCE ON COMPUTER VISION》 *
SHUTAO LI ET AL: ""Image Fusion with Guided Filtering"", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109754384A (en) * 2018-12-18 2019-05-14 电子科技大学 Infrared polarization image fusion method for an uncooled infrared focal plane array
CN109754384B (en) * 2018-12-18 2022-11-22 电子科技大学 Infrared polarization image fusion method of uncooled infrared focal plane array
CN110211080A (en) * 2019-05-24 2019-09-06 南昌航空大学 An anatomical and functional medical image fusion method
CN110738677A (en) * 2019-09-20 2020-01-31 清华大学 Full-definition imaging method and device for camera and electronic equipment
CN110956592A (en) * 2019-11-14 2020-04-03 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111833284A (en) * 2020-07-16 2020-10-27 昆明理工大学 Multi-source image fusion method based on low-rank decomposition and convolution sparse coding
CN111833284B (en) * 2020-07-16 2022-10-14 昆明理工大学 Multi-source image fusion method based on low-rank decomposition and convolution sparse coding
CN115358963A (en) * 2022-10-19 2022-11-18 季华实验室 Image fusion method based on extended Gaussian difference and guided filtering
CN115358963B (en) * 2022-10-19 2022-12-27 季华实验室 Image fusion method based on extended Gaussian difference and guided filtering


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180213