CN103578092A - Multi-focus image fusion method - Google Patents

Multi-focus image fusion method Download PDF

Info

Publication number
CN103578092A
CN103578092A (application CN201310562341.3A)
Authority
CN
China
Prior art keywords
image
fused
matrix
pixel
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310562341.3A
Other languages
Chinese (zh)
Inventor
陈莉
张永新
赵志华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwest University
Original Assignee
Northwest University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwest University filed Critical Northwest University
Priority to CN201310562341.3A priority Critical patent/CN103578092A/en
Publication of CN103578092A publication Critical patent/CN103578092A/en
Priority to CN201410280417.8A priority patent/CN104036479B/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a multi-focus image fusion method. The source images are first fused with an NMF-based fusion algorithm to obtain a temporary fused image. The temporary fused image is then differenced against each source image to obtain difference images, the gradient energy in a neighbouring window of every pixel of each difference image is computed, a decision matrix is built from these gradient energies, and the corresponding pixels of the source images are fused under the resulting fusion rule to obtain the fused image. By fusing the source images twice, building the temporary fused image from the global features of the source images and then differencing it against them, the method detects and judges the focused regions of the source images accurately through the gradient energy of the difference images, and thereby improves the quality of the fused image.

Description

Multi-focus image fusion method
Technical field
The invention belongs to the technical field of image processing and specifically relates to a multi-focus image fusion method.
Background technology
Multi-focus image fusion takes several registered images of a scene, acquired under identical imaging conditions but focused on different objects, extracts the clear region of each with a suitable fusion algorithm, and merges those regions into a single image in which all objects of the scene are sharp. It is widely used in fields such as traffic, medicine, security and logistics, and can effectively improve the utilization and reliability of sensor image information for target detection and recognition.
Pixel-level image fusion applies a suitable fusion algorithm directly in the grey-level space of the original image pixels, chiefly to support subsequent image enhancement, segmentation and classification. Compared with feature-level and decision-level fusion, pixel-level fusion algorithms are more accurate, lose the least information, and can provide detail that feature-level and decision-level fusion cannot.
With the development of computing and imaging technology, the following pixel-level multi-focus image fusion methods have gradually become common in recent years:
(1) Fusion based on the discrete wavelet transform (DWT). The source images are wavelet-decomposed, the high- and low-frequency coefficients are merged under a suitable fusion rule, and the inverse wavelet transform of the merged coefficients gives the fused image. The method has good time-frequency localization and achieves good results, but the DWT cannot fully exploit the geometric structure inherent in the image data and yields neither an optimal nor a 'sparse' representation of the image.
(2) Fusion based on the nonsubsampled contourlet transform (NSCT). The source images are NSCT-decomposed, the high- and low-frequency coefficients are merged under a suitable fusion rule, and the inverse NSCT of the merged coefficients gives the fused image. The method fuses well, but it runs slowly and its decomposition coefficients require a large amount of storage.
(3) Fusion based on principal component analysis (PCA). Each source image is converted to a column vector in row- or column-major order, the covariance matrix of these vectors is computed, its eigenvectors are determined, and the eigenvector of the first principal component fixes the fusion weight of each source image; the images are then combined by weighted fusion. The method is simple and fast, but it tends to reduce the contrast of the fused image and so degrades its quality.
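A minimal NumPy sketch of this prior-art PCA scheme (the function name is illustrative; the patent text describes the method but gives no code):

```python
import numpy as np

def pca_fuse(I_A, I_B):
    """Weighted fusion with weights taken from the first principal component."""
    X = np.column_stack([I_A.ravel(), I_B.ravel()]).astype(float)
    C = np.cov(X, rowvar=False)                # 2 x 2 covariance of the two images
    vals, vecs = np.linalg.eigh(C)
    v = np.abs(vecs[:, np.argmax(vals)])       # first principal component
    w = v / v.sum()                            # normalised fusion weights
    return w[0] * I_A + w[1] * I_B
```

Since the weights depend only on global second-order statistics, the scheme cannot favour one image in one region and the other image elsewhere, which is one source of the contrast loss noted above.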
(4) Fusion based on spatial frequency (SF). The source images are partitioned into blocks, the SF of each block is computed, the SFs of corresponding blocks are compared, and the block with the larger SF is copied into the fused image. The method is simple to implement, but the block size is hard to determine adaptively: blocks that are too large pull in out-of-focus pixels, lowering fusion quality, reducing contrast and producing blocking artifacts, while blocks that are too small characterize regional sharpness poorly, leading to wrong block choices and sensitivity to noise.
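The SF computation and the block-selection rule can be sketched as follows (the block size and the tie-break are illustrative choices, not fixed by the text):

```python
import numpy as np

def spatial_frequency(block):
    """SF of a block: root of (row frequency)^2 + (column frequency)^2."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))   # horizontal activity
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))   # vertical activity
    return np.sqrt(rf ** 2 + cf ** 2)

def sf_fuse(I_A, I_B, bs=8):
    """Block-wise fusion: at each position copy the block with the larger SF."""
    F = np.empty_like(I_A, dtype=float)
    for i in range(0, I_A.shape[0], bs):
        for j in range(0, I_A.shape[1], bs):
            a = I_A[i:i + bs, j:j + bs]
            b = I_B[i:i + bs, j:j + bs]
            F[i:i + bs, j:j + bs] = a if spatial_frequency(a) >= spatial_frequency(b) else b
    return F
```

The hard-coded `bs` illustrates exactly the weakness the text describes: there is no principled way to pick it for a given pair of images.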
(5) Fusion based on the pulse-coupled neural network (PCNN). Each pixel's grey value is fed to the PCNN as an external stimulus, the firing rate of each input pixel is computed from the network's firing maps, and the pixels with the larger firing rates are selected for the fused image. The method propagates and couples information automatically and its results preserve image features well, but it has many parameters, a complex model and a long running time; moreover, human vision is more sensitive to variation at image edges than to the brightness of a single pixel, so driving the neurons with single-pixel grey values yields unsatisfactory fusion results.
The five methods above are the most common multi-focus image fusion methods, but each has shortcomings: the wavelet transform (DWT) cannot fully exploit the geometric structure of the image data, gives neither an optimal nor a 'sparse' representation, and easily introduces shifts and information loss in the fused image; the nonsubsampled contourlet transform (NSCT) has a complex decomposition, runs slowly and needs a large amount of storage for its coefficients; principal component analysis (PCA) tends to reduce the contrast, and hence the quality, of the fused image; the pulse-coupled neural network (PCNN) has many parameters, a complex model and a long running time. For all five it is difficult to reconcile speed with fusion quality, which limits their application and adoption.
Summary of the invention
The technical problem solved by the invention is that, in the field of multi-focus image fusion, existing fusion algorithms represent image features and fine detail insufficiently clearly, cannot select the block size adaptively, and produce blocking artifacts, so their fusion results are unsatisfactory. The invention therefore provides a multi-focus image fusion method that fuses two registered multi-focus images I_A and I_B, where I_A and I_B are grey-level images and I_A, I_B ∈ R^{M×N}, R^{M×N} being the space of size M × N with M and N positive integers. The fusion method comprises the following steps:
(1) Build the observation matrix V of the multi-focus images I_A and I_B;
(2) Decompose the observation matrix V with a non-negative matrix factorization algorithm to obtain the basis matrix W;
(3) Convert the basis matrix W into a matrix of size M × N; the image corresponding to this matrix is the temporary fused image I_0;
(4) Subtract each source image from the temporary fused image to obtain the difference images D_A = I_0 - I_A and D_B = I_0 - I_B;
(5) Compute the gradient energy in the neighbourhood of every pixel of D_A and D_B; the neighbourhood size is 5 × 5 or 7 × 7;
(6) Construct the feature matrix H ∈ R^{M×N}:

H(i, j) = 1 if EODG_A(i, j) ≥ EODG_B(i, j), otherwise H(i, j) = 0 (formula 1)

In (formula 1):
EODG_A(i, j) is the gradient energy of the difference image D_A in the neighbourhood of pixel (i, j);
EODG_B(i, j) is the gradient energy of the difference image D_B in the neighbourhood of pixel (i, j);
i = 1, 2, 3, …, M; j = 1, 2, 3, …, N;
H(i, j) is the element in row i, column j of the matrix H;
(7) Build the fused image F ∈ R^{M×N}, the fused grey-level image:

F(i, j) = I_A(i, j) if H(i, j) = 1; F(i, j) = I_B(i, j) if H(i, j) = 0 (formula 2)
In (formula 2):
F(i, j) is the grey value of the fused grey-level image F at pixel (i, j);
I_A(i, j) is the grey value of the pre-fusion grey-level image I_A at pixel (i, j);
I_B(i, j) is the grey value of the pre-fusion grey-level image I_B at pixel (i, j).
The feature matrix built in step (6) is processed with morphological erosion and dilation operations, and the processed feature matrix is used to build the fused image.
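The seven steps above can be sketched end-to-end in NumPy. This is a minimal illustration, not the patented implementation: the choice of NMF solver (Lee-Seung multiplicative updates here), the rescaling of the basis vector into the image range, and the '>=' tie-break in the decision matrix are assumptions the text does not fix.

```python
import numpy as np

def eodg_map(D, win=5):
    """Neighbourhood gradient energy of a difference image D (win x win window)."""
    gx = np.zeros_like(D)
    gy = np.zeros_like(D)
    gx[:-1, :] = np.diff(D, axis=0)          # vertical first differences
    gy[:, :-1] = np.diff(D, axis=1)          # horizontal first differences
    g2 = gx ** 2 + gy ** 2
    r = win // 2
    P = np.pad(g2, r, mode="edge")
    out = np.empty_like(D)
    for i in range(D.shape[0]):              # box sum over each neighbourhood
        for j in range(D.shape[1]):
            out[i, j] = P[i:i + win, j:j + win].sum()
    return out

def fuse(I_A, I_B, win=5, n_iter=300, eps=1e-9, seed=0):
    """Two-stage fusion following steps (1)-(7) of the summary."""
    M, N = I_A.shape
    # (1) observation matrix: one flattened source image per column
    V = np.column_stack([I_A.ravel(), I_B.ravel()]).astype(float)
    # (2) rank-1 NMF, V ~ w @ h, via Lee-Seung multiplicative updates
    rng = np.random.default_rng(seed)
    w = rng.random((M * N, 1)) + eps
    h = rng.random((1, 2)) + eps
    for _ in range(n_iter):
        h *= (w.T @ V) / (w.T @ w @ h + eps)
        w *= (V @ h.T) / (w @ h @ h.T + eps)
    # (3) temporary fused image from the single basis vector (rescaled)
    I0 = (w * h.mean()).reshape(M, N)
    # (4) difference images
    D_A, D_B = I0 - I_A, I0 - I_B
    # (5)-(6) neighbourhood gradient energies and decision matrix
    H = (eodg_map(D_A, win) >= eodg_map(D_B, win)).astype(int)
    # (7) pixel-wise selection (formula 2)
    return np.where(H == 1, I_A, I_B)
```

Because step (7) only ever copies source pixels, the fused image inherits the grey-level range of the sources; all approximation error of the NMF stage is confined to the decision matrix.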
Compared with the prior art, the beneficial effects of the invention are:
(1) The invention first applies non-negative matrix factorization to the source images for a first fusion, then differences the result against each source image and, by comparing the gradient energy in each pixel's neighbourhood, judges the focus property of every region of the source images, building a decision matrix for a second fusion. This two-stage fusion raises the accuracy with which the focused regions of the source images are identified, aids the extraction of clear-region targets, effectively suppresses blocking artifacts, and represents image detail well.
(2) The image fusion framework of the invention is flexible and can be applied to other kinds of image fusion task.
In summary, the framework of the invention is flexible, judges the focus property of source-image regions with high accuracy, effectively suppresses blocking artifacts, extracts clear-region target detail accurately, expresses image detail clearly, and improves the quality of the fused image.
Brief description of the drawings
The invention is further explained below with reference to the drawings and embodiments.
Fig. 1 shows the source images to be fused of embodiment 1;
Fig. 2 shows the fusion results of nine image fusion methods on the multi-focus image 'clock' of Fig. 1: the Laplacian pyramid (LAP), the discrete wavelet transform (DWT), the nonsubsampled contourlet transform (NSCT), spatial frequency (SF), non-negative matrix factorization (NMF), local non-negative matrix factorization (LNMF), sparse non-negative matrix factorization (SNMF), sparseness-constrained non-negative matrix factorization (NMFsc), and the method of the invention (Proposed);
Fig. 3 shows, for the same nine methods, the difference images between each fused result of 'clock' and the source image Fig. 1(b);
Fig. 4 shows the source images to be fused of embodiment 2;
Fig. 5 shows the fusion results of the nine methods on the multi-focus image 'book' of Fig. 4(a) and (b);
Fig. 6 shows, for the nine methods, the difference images between each fused result of 'book' and the source image Fig. 4(b).
Embodiment
To overcome the shortcomings in the field of multi-focus image fusion that the block size cannot be chosen adaptively, that details are unclear or partially lost, that contrast is degraded, and that fusion results are unsatisfactory, the invention provides a multi-focus image fusion method based on non-negative matrix factorization. The concrete operation of the method is explained as follows:
The method of the invention fuses two registered multi-focus images I_A, I_B ∈ R^{M×N}, both of size M × N. A vector conversion operation turns each image into a column vector, and the resulting column vectors are merged into the observation matrix V:

V = [V_A  V_B] =
[ v_1A    v_1B  ]
[ v_2A    v_2B  ]
[  …       …   ]
[ v_qA    v_qB  ]
[  …       …   ]
[ v_MNA   v_MNB ]

where V_A is the column vector obtained from the image matrix of source image I_A, V_B is the column vector obtained from the image matrix of source image I_B, v_qA and v_qB are the elements of those column vectors, and q = 1, 2, 3, …, MN.
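As a concrete illustration of the observation-matrix construction, with hypothetical 2 × 2 images:

```python
import numpy as np

# Hypothetical 2 x 2 source images (M = N = 2).
I_A = np.array([[10., 20.], [30., 40.]])
I_B = np.array([[11., 21.], [31., 41.]])

# Column-major flattening mirrors the column-vector construction above;
# any fixed order works as long as the reshape in the later step matches it.
V = np.column_stack([I_A.ravel(order="F"), I_B.ravel(order="F")])
print(V.shape)  # (4, 2): MN rows, one column per source image
```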
The invention applies non-negative matrix factorization (NMF) to the observation matrix V to obtain the basis matrix W, by which V can be represented linearly. The basis matrix W has a single basis vector, and this basis vector represents the complete features of the observation matrix V.
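The text does not name a particular NMF algorithm; a minimal rank-1 factorization using the standard Lee-Seung multiplicative updates might look like this (solver choice and iteration count are assumptions):

```python
import numpy as np

def nmf_rank1(V, n_iter=500, eps=1e-9, seed=0):
    """Factor a non-negative matrix V ~ w @ h with a single basis vector w."""
    rng = np.random.default_rng(seed)
    w = rng.random((V.shape[0], 1)) + eps
    h = rng.random((1, V.shape[1])) + eps
    for _ in range(n_iter):
        h *= (w.T @ V) / (w.T @ w @ h + eps)   # update the coefficients
        w *= (V @ h.T) / (w @ h @ h.T + eps)   # update the basis vector
    return w, h
```

When the two columns of V are proportional, V is exactly rank 1 and the single basis vector w captures the shared structure of the observation matrix; in the method it is then reshaped to M × N as the temporary fused image.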
The invention uses a matrix conversion operation to convert the basis matrix W into the temporary fused image I_0 ∈ R^{M×N}; I_0 then corresponds to the source images I_A, I_B ∈ R^{M×N} and is of the same size.
The difference images D_A, D_B ∈ R^{M×N} of the invention are obtained by subtracting the source images I_A and I_B from the temporary fused image I_0 ∈ R^{M×N}: D_A = I_0 - I_A and D_B = I_0 - I_B. The two difference images have the same size, M × N, and when the gradient energy in the neighbourhood of corresponding pixels of the two difference images is computed, the two pixels occupy identical coordinates in their respective difference images.
The gradient energy (EODG) of the invention is computed as follows:

EODG(α, β) = Σ_{k = -(K-1)/2}^{(K-1)/2} Σ_{l = -(L-1)/2}^{(L-1)/2} ( f_{α+k}^2 + f_{β+l}^2 )

f_{α+k} = [f_0(α+k+1, β) - f(α+k+1, β)] - [f_0(α+k, β) - f(α+k, β)]
f_{β+l} = [f_0(α, β+l+1) - f(α, β+l+1)] - [f_0(α, β+l) - f(α, β+l)]

Wherein:
K × L is the size of the neighbourhood of pixel (α, β), taken as 5 × 5 or 7 × 7;
-(K-1)/2 ≤ k ≤ (K-1)/2, with k an integer;
-(L-1)/2 ≤ l ≤ (L-1)/2, with l an integer;
f(α, β) is the grey value of pixel (α, β) in the source image;
f_0(α, β) is the grey value of pixel (α, β) in the temporary fused image.
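A direct transcription of the EODG formula for a single interior pixel (border handling is left open by the text and is an implementation choice):

```python
import numpy as np

def eodg(f0, f, alpha, beta, K=5, L=5):
    """Gradient energy of the difference image D = f0 - f in the K x L
    neighbourhood of pixel (alpha, beta); assumes an interior pixel."""
    D = f0.astype(float) - f.astype(float)
    total = 0.0
    for k in range(-(K - 1) // 2, (K - 1) // 2 + 1):
        for l in range(-(L - 1) // 2, (L - 1) // 2 + 1):
            # f_{alpha+k}: vertical first difference of D
            fa = D[alpha + k + 1, beta] - D[alpha + k, beta]
            # f_{beta+l}: horizontal first difference of D
            fb = D[alpha, beta + l + 1] - D[alpha, beta + l]
            total += fa ** 2 + fb ** 2
    return total
```

Note that the differences are taken on D itself, so regions where the temporary fused image and the source agree (up to a constant) contribute no energy.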
In the decision matrix H ∈ R^{M×N} of the invention, '1' indicates that the corresponding pixel of source image I_A lies in the focused region, and '0' indicates that the corresponding pixel of source image I_B lies in the focused region.
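With hypothetical gradient-energy values, the decision matrix and the pixel-wise selection of formula 2 reduce to two NumPy operations; the '>=' tie-break is an assumption, since the original formula is reproduced only as an image:

```python
import numpy as np

# Illustrative per-pixel gradient energies of the two difference images.
E_A = np.array([[4., 1.], [3., 0.]])
E_B = np.array([[2., 5.], [1., 6.]])
I_A = np.full((2, 2), 100.)
I_B = np.full((2, 2), 200.)

H = (E_A >= E_B).astype(int)     # 1: take the I_A pixel, 0: take the I_B pixel
F = np.where(H == 1, I_A, I_B)   # pixel-wise selection (formula 2)
```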
Because gradient energy alone, as a criterion of image sharpness, may fail to extract all clear blocks completely, the decision matrix can contain burrs, holes and narrow adhesions between regions, so morphological erosion and dilation operations are applied to the decision matrix.
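A sketch of that morphological clean-up, assuming SciPy is available: opening (erosion followed by dilation) removes isolated misclassified pixels, and closing (dilation followed by erosion) fills small holes and narrow gaps.

```python
import numpy as np
from scipy import ndimage

# A noisy binary decision matrix (illustrative values).
H = np.zeros((9, 9), dtype=bool)
H[2:7, 2:7] = True        # a coherent focused region
H[0, 0] = True            # an isolated misclassified "burr" pixel

# The lone pixel cannot survive the erosion step of the opening,
# while the large coherent region is essentially preserved.
H_open = ndimage.binary_opening(H)
H_clean = ndimage.binary_closing(H_open)
```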
Embodiments provided by the inventors follow, to explain the technical scheme of the invention further.
Embodiment 1:
Following the technical scheme of the invention, this embodiment fuses the two source images of Fig. 1(a) and (b); the result is shown as 'Propose' in Fig. 2. For comparison, the eight image fusion methods LAP, DWT, NSCT, SF, NMF, LNMF, SNMF and NMFsc were applied to the same two source images; their results are also shown in Fig. 2. The fused images of the different methods were assessed for quality, with the computed results shown in Table 1.
Table 1: quality evaluation of the fused multi-focus image 'clock'.
Method  MI      Q^AB/F  Q_0     Q_W     Q_E     Running Time (s)
LAP 6.9328 0.6872 0.7641 0.9154 0.5268 0.8490
DWT 5.8622 0.6317 0.6964 0.8802 0.492 0.4699
NSCT 6.5346 0.6603 0.7566 0.8850 0.4944 205.3386
SF 7.6723 0.6768 0.7952 0.8957 0.4848 0.8953
NMF 6.8719 0.5875 0.7770 0.8594 0.3796 1.2612
LNMF 6.9078 0.6601 0.7706 0.8566 0.3809 11.5779
SNMF 6.8703 0.5801 0.7754 0.8561 0.3811 13.1989
NMFsc 6.8677 0.5699 0.7674 0.8487 0.3761 15.8415
Propose 8.4516 0.7351 0.7509 0.8689 0.5439 10.7145
Embodiment 2:
Following the technical scheme of the invention, this embodiment fuses the two source images of Fig. 4(a) and (b); the result is shown as 'Propose' in Fig. 5.
For comparison, the eight image fusion methods LAP, DWT, NSCT, SF, NMF, LNMF, SNMF and NMFsc were applied to the two source images of Fig. 4(a) and (b); their results are also shown in Fig. 5. The fused images of the different methods in Fig. 5 were assessed for quality, with the computed results shown in Table 2.
Table 2: quality evaluation of the fused multi-focus image 'book'.
[Table 2 appears in the original publication as an image; its numerical values are not recoverable here.]
In Table 1 and Table 2: 'Method' names the fusion method; besides the proposed method there are eight: LAP, DWT, NSCT, SF, NMF, LNMF, SNMF and NMFsc. 'Running Time' is the running time in seconds. MI is mutual information, an objective index of fused-image quality based on mutual information. Q^AB/F measures the total amount of edge information transferred from the source images; Q_0 measures the distortion of the fused image; Q_W measures how much salient information the fused image transfers from the source images; Q_E measures how much visual and edge information the fused image transfers from the source images. The larger the value of a Q index, the better the quality of the fused image.
As Fig. 3 and Fig. 5 show, the other methods all exhibit shifts and blurring to varying degrees, and the fusion results of the method of the invention on the multi-focus images 'clock' (Fig. 1) and 'book' (Fig. 4) are clearly better than those of the other fusion methods.
The difference images of Fig. 3 (between the fused images of Fig. 2 and the source image Fig. 1(b)) and of Fig. 6 (between the fused images of Fig. 5 and the source image Fig. 4(b)) show that this method extracts the edges and texture of objects in the focused regions of the source images markedly better than the other methods, transfers the target information of the focused regions into the fused image well, effectively captures the target detail of the focused regions, and improves fusion quality.
The above examples merely illustrate the invention and do not limit its scope of protection; every design identical or similar to the invention falls within the scope of protection of the invention.

Claims (2)

1. A multi-focus image fusion method which fuses two registered multi-focus images I_A and I_B, where I_A and I_B are grey-level images and I_A, I_B ∈ R^{M×N}, R^{M×N} being the space of size M × N with M and N positive integers, characterized in that the fusion method comprises the following steps:
(1) Build the observation matrix V of the multi-focus images I_A and I_B;
(2) Decompose the observation matrix V with a non-negative matrix factorization algorithm to obtain the basis matrix W;
(3) Convert the basis matrix W into a matrix of size M × N; the image corresponding to this matrix is the temporary fused image I_0;
(4) Subtract each source image from the temporary fused image to obtain the difference images D_A = I_0 - I_A and D_B = I_0 - I_B;
(5) Compute the gradient energy in the neighbourhood of every pixel of D_A and D_B; the neighbourhood size is 5 × 5 or 7 × 7;
(6) Construct the feature matrix H ∈ R^{M×N}:

H(i, j) = 1 if EODG_A(i, j) ≥ EODG_B(i, j), otherwise H(i, j) = 0 (formula 1)

In (formula 1):
EODG_A(i, j) is the gradient energy of the difference image D_A in the neighbourhood of pixel (i, j);
EODG_B(i, j) is the gradient energy of the difference image D_B in the neighbourhood of pixel (i, j);
i = 1, 2, 3, …, M; j = 1, 2, 3, …, N;
H(i, j) is the element in row i, column j of the matrix H;
(7) Build the fused image F ∈ R^{M×N}, the fused grey-level image:

F(i, j) = I_A(i, j) if H(i, j) = 1; F(i, j) = I_B(i, j) if H(i, j) = 0 (formula 2)

In (formula 2):
F(i, j) is the grey value of the fused grey-level image F at pixel (i, j);
I_A(i, j) is the grey value of the pre-fusion grey-level image I_A at pixel (i, j);
I_B(i, j) is the grey value of the pre-fusion grey-level image I_B at pixel (i, j).
2. The multi-focus image fusion method as claimed in claim 1, characterized in that the feature matrix built in step (6) is processed with morphological erosion and dilation operations, and the processed feature matrix is used to build the fused image.
CN201310562341.3A 2013-11-11 2013-11-11 Multi-focus image fusion method Pending CN103578092A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201310562341.3A CN103578092A (en) 2013-11-11 2013-11-11 Multi-focus image fusion method
CN201410280417.8A CN104036479B (en) 2013-11-11 2014-06-20 Multi-focus image fusion method based on non-negative matrix factorization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310562341.3A CN103578092A (en) 2013-11-11 2013-11-11 Multi-focus image fusion method

Publications (1)

Publication Number Publication Date
CN103578092A true CN103578092A (en) 2014-02-12

Family

ID=50049818

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201310562341.3A Pending CN103578092A (en) 2013-11-11 2013-11-11 Multi-focus image fusion method
CN201410280417.8A Expired - Fee Related CN104036479B (en) 2013-11-11 2014-06-20 Multi-focus image fusion method based on non-negative matrix factorization

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201410280417.8A Expired - Fee Related CN104036479B (en) 2013-11-11 2014-06-20 Multi-focus image fusion method based on non-negative matrix factorization

Country Status (1)

Country Link
CN (2) CN103578092A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318532A (en) * 2014-10-23 2015-01-28 湘潭大学 Secondary image fusion method combined with compressed sensing
CN106846287A (en) * 2017-01-13 2017-06-13 西京学院 A kind of multi-focus image fusing method based on biochemical ion exchange model
CN107004003A (en) * 2015-11-16 2017-08-01 华为技术有限公司 model parameter fusion method and device
CN108171676A (en) * 2017-12-01 2018-06-15 西安电子科技大学 Multi-focus image fusing method based on curvature filtering
CN108230282A (en) * 2017-11-24 2018-06-29 洛阳师范学院 A kind of multi-focus image fusing method and system based on AGF
CN108416163A (en) * 2018-03-23 2018-08-17 湖南城市学院 A method of three-dimensional panorama indoor design is generated based on number connection platform
CN108510465A (en) * 2018-01-30 2018-09-07 西安电子科技大学 The multi-focus image fusing method indicated based on consistency constraint non-negative sparse
CN109360179A (en) * 2018-10-18 2019-02-19 上海海事大学 A kind of image interfusion method, device and readable storage medium storing program for executing
CN109767414A (en) * 2019-01-18 2019-05-17 湖北工业大学 A kind of multi-focus image fusing method based on gray scale median reference

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680179B (en) * 2015-03-09 2018-06-26 西安电子科技大学 Method of Data with Adding Windows based on neighborhood similarity
CN104952048B (en) * 2015-06-09 2017-12-08 浙江大学 A kind of focus storehouse picture synthesis method based on as volume reconstruction
CN108734686A (en) * 2018-05-28 2018-11-02 成都信息工程大学 Multi-focus image fusing method based on Non-negative Matrix Factorization and visual perception
CN109509163B (en) * 2018-09-28 2022-11-11 洛阳师范学院 FGF-based multi-focus image fusion method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1272746C (en) * 2004-07-15 2006-08-30 上海交通大学 Multiple focus image fusing method based inseparable small wave frame change
CN100573584C (en) * 2008-01-18 2009-12-23 西安电子科技大学 Based on imaging mechanism and non-sampling Contourlet conversion multi-focus image fusing method

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318532B (en) * 2014-10-23 2017-04-26 湘潭大学 Secondary image fusion method combined with compressed sensing
CN104318532A (en) * 2014-10-23 2015-01-28 湘潭大学 Secondary image fusion method combined with compressed sensing
CN107004003A (en) * 2015-11-16 2017-08-01 华为技术有限公司 model parameter fusion method and device
US11373116B2 (en) 2015-11-16 2022-06-28 Huawei Technologies Co., Ltd. Model parameter fusion method and apparatus
CN107004003B (en) * 2015-11-16 2020-04-28 华为技术有限公司 Model parameter fusion method and device
CN106846287A (en) * 2017-01-13 2017-06-13 西京学院 A kind of multi-focus image fusing method based on biochemical ion exchange model
CN108230282A (en) * 2017-11-24 2018-06-29 洛阳师范学院 A kind of multi-focus image fusing method and system based on AGF
CN108171676A (en) * 2017-12-01 2018-06-15 西安电子科技大学 Multi-focus image fusing method based on curvature filtering
CN108171676B (en) * 2017-12-01 2019-10-11 西安电子科技大学 Multi-focus image fusing method based on curvature filtering
CN108510465B (en) * 2018-01-30 2019-12-24 西安电子科技大学 Multi-focus image fusion method based on consistency constraint non-negative sparse representation
CN108510465A (en) * 2018-01-30 2018-09-07 西安电子科技大学 The multi-focus image fusing method indicated based on consistency constraint non-negative sparse
CN108416163A (en) * 2018-03-23 2018-08-17 湖南城市学院 A method of three-dimensional panorama indoor design is generated based on number connection platform
CN109360179A (en) * 2018-10-18 2019-02-19 上海海事大学 A kind of image interfusion method, device and readable storage medium storing program for executing
CN109767414A (en) * 2019-01-18 2019-05-17 湖北工业大学 A kind of multi-focus image fusing method based on gray scale median reference

Also Published As

Publication number Publication date
CN104036479B (en) 2017-04-19
CN104036479A (en) 2014-09-10

Similar Documents

Publication Publication Date Title
CN103578092A (en) Multi-focus image fusion method
CN103455991B (en) A kind of multi-focus image fusing method
KR101622344B1 (en) A disparity caculation method based on optimized census transform stereo matching with adaptive support weight method and system thereof
CN103927717B (en) Depth image restoration methods based on modified model bilateral filtering
CN102999913B (en) A kind of sectional perspective matching process based on credible propagation
CN109887021B (en) Cross-scale-based random walk stereo matching method
CN106228528B (en) A kind of multi-focus image fusing method based on decision diagram and rarefaction representation
CN102567973B (en) Image denoising method based on improved shape self-adaptive window
CN106127688B (en) A kind of super-resolution image reconstruction method and its system
CN106600632B (en) A kind of three-dimensional image matching method improving matching cost polymerization
CN111160291B (en) Human eye detection method based on depth information and CNN
CN103458261B (en) Video scene variation detection method based on stereoscopic vision
CN110246151B (en) Underwater robot target tracking method based on deep learning and monocular vision
CN104036481B (en) Multi-focus image fusion method based on depth information extraction
CN107909560A (en) A kind of multi-focus image fusing method and system based on SiR
CN103186894B (en) A kind of multi-focus image fusing method of self-adaptation piecemeal
CN104463870A (en) Image salient region detection method
CN102005054A (en) Real-time infrared image target tracking method
CN104616274A (en) Algorithm for fusing multi-focusing image based on salient region extraction
CN104517317A (en) Three-dimensional reconstruction method of vehicle-borne infrared images
CN105913407A (en) Method for performing fusion optimization on multi-focusing-degree image base on difference image
CN105138983A (en) Pedestrian detection method based on weighted part model and selective search segmentation
CN109509163A (en) A kind of multi-focus image fusing method and system based on FGF
CN105809673A (en) SURF (Speeded-Up Robust Features) algorithm and maximal similarity region merging based video foreground segmentation method
CN113505670A (en) Remote sensing image weak supervision building extraction method based on multi-scale CAM and super-pixels

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140212